
drpc's Introduction

DRPC

A drop-in, lightweight gRPC replacement.


Links

Highlights

  • Simple, at just a few thousand lines of code.
  • Small dependencies. Only 3 requirements in go.mod, and 9 lines of go mod graph!
  • Compatible. Works for many gRPC use-cases as-is!
  • Fast. DRPC has a lightning quick wire format.
  • Extensible. DRPC is transport agnostic, supports middleware, and is designed around interfaces.
  • Battle Tested. Already used in production for years across tens of thousands of servers.

External Packages

  • go.bryk.io/pkg/net/drpc

    • Simplified TLS setup (for client and server)
    • Server middleware, including basic components for logging, token-based auth, rate limit, panic recovery, etc
    • Client middleware, including basic components for logging, custom metadata, panic recovery, etc
    • Bi-directional streaming support over upgraded HTTP(S) connections using WebSockets
    • Concurrent RPCs via connection pool
  • go.elara.ws/drpc

    • Concurrent RPCs based on yamux
    • Simple drop-in replacements for drpcserver and drpcconn
  • Open an issue or join the Zulip chat if you'd like to be featured here.

Examples

Other Languages

DRPC can be made compatible with RPC clients generated from other languages. For example, Twirp clients and grpc-web clients can be used against the drpchttp package.
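For illustration, here is a minimal sketch of serving a DRPC handler over HTTP with drpchttp (the per-service registration call is generated by protoc-gen-go-drpc and only hinted at here):

package main

import (
	"net/http"

	"storj.io/drpc/drpchttp"
	"storj.io/drpc/drpcmux"
)

func main() {
	// Register your generated service implementation on the mux, e.g. via the
	// DRPCRegister<Service> helper emitted by protoc-gen-go-drpc (omitted here).
	mux := drpcmux.New()

	// drpchttp.New wraps a drpc handler as an http.Handler, so unitary RPCs can
	// be served to Twirp-style or grpc-web-style HTTP clients.
	_ = http.ListenAndServe(":8080", drpchttp.New(mux))
}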

Native implementations can have some advantages, so support for other languages is in progress, all in various states of completeness. Join the Zulip chat if you want more information or to help out with any of them!

Language   Repository                            Status
C++        https://github.com/storj/drpc-cpp     Incomplete
Rust       https://github.com/zeebo/drpc-rs      Incomplete
Node       https://github.com/mjpitz/drpc-node   Incomplete

Licensing

DRPC is licensed under the MIT/expat license. See the LICENSE file for more.


Benchmarks

These microbenchmarks attempt to provide a comparison, but they come with some caveats. First, they do not send data over a network connection, which is expected to be the bottleneck almost all of the time. Second, no attempt was made to run the benchmarks in a controlled environment (CPU scaling disabled, no background noise, etc.). Third, no tuning was done to ensure both libraries are performing optimally, so DRPC has an inherent advantage because the author is familiar with how it works.

Measure     Benchmark       Small (gRPC / DRPC / delta)        Medium (gRPC / DRPC / delta)        Large (gRPC / DRPC / delta)
time/op     Unitary         29.7µs / 8.3µs / -72.18%           36.4µs / 11.3µs / -68.92%           1.70ms / 0.54ms / -68.24%
            Input Stream    1.56µs / 0.79µs / -49.07%          3.80µs / 2.04µs / -46.28%           784µs / 239µs / -69.48%
            Output Stream   1.51µs / 0.78µs / -48.47%          3.81µs / 2.02µs / -47.06%           691µs / 224µs / -67.55%
            Bidir Stream    8.79µs / 3.25µs / -63.07%          13.7µs / 5.0µs / -63.73%            1.73ms / 0.47ms / -72.72%
speed       Unitary         70.0kB/s / 240.0kB/s / +242.86%    56.3MB/s / 181.1MB/s / +221.52%     618MB/s / 1939MB/s / +213.84%
            Input Stream    1.28MB/s / 2.52MB/s / +96.11%      540MB/s / 1006MB/s / +86.16%        1.34GB/s / 4.38GB/s / +226.51%
            Output Stream   1.33MB/s / 2.57MB/s / +93.88%      538MB/s / 1017MB/s / +89.14%        1.52GB/s / 4.68GB/s / +208.05%
            Bidir Stream    230kB/s / 616kB/s / +167.93%       149MB/s / 412MB/s / +175.73%        610MB/s / 2215MB/s / +262.96%
mem/op      Unitary         9.42kB / 1.42kB / -84.95%          22.7kB / 7.8kB / -65.61%            6.42MB / 3.16MB / -50.74%
            Input Stream    465B / 80B / -82.80%               7.06kB / 2.13kB / -69.87%           3.20MB / 1.05MB / -67.10%
            Output Stream   360B / 80B / -77.81%               6.98kB / 2.13kB / -69.52%           3.20MB / 1.05MB / -67.21%
            Bidir Stream    1.09kB / 0.24kB / -77.94%          14.4kB / 4.3kB / -69.90%            6.42MB / 2.10MB / -67.22%
allocs/op   Unitary         182 / 7 / -96.15%                  184 / 9 / -95.11%                   280 / 9 / -96.79%
            Input Stream    11 / 1 / -90.91%                   12 / 2 / -83.33%                    39.2 / 2 / -94.90%
            Output Stream   11 / 1 / -90.91%                   12 / 2 / -83.33%                    38 / 2 / -94.74%
            Bidir Stream    43 / 3 / -93.02%                   46 / 5 / -89.13%                    140 / 5 / -96.43%

Lines of code

DRPC is proud to get as much done in as few lines of code as possible. It's the author's belief that this is only possible with a clean, strong architecture, and that it reduces the chances for bugs to exist (most studies show a linear correlation between number of bugs and lines of code). This table helps keep the library honest, and it would be nice if more libraries considered this.

Package Lines
storj.io/drpc/drpcstream 486
storj.io/drpc/drpchttp 478
storj.io/drpc/cmd/protoc-gen-go-drpc 428
storj.io/drpc/drpcmanager 376
storj.io/drpc/drpcwire 363
storj.io/drpc/drpcpool 279
storj.io/drpc/drpcmigrate 239
storj.io/drpc/drpcserver 164
storj.io/drpc/drpcconn 134
storj.io/drpc/drpcsignal 133
storj.io/drpc/drpcmetadata 115
storj.io/drpc/drpcmux 95
storj.io/drpc/drpccache 54
storj.io/drpc 47
storj.io/drpc/drpctest 45
storj.io/drpc/drpcerr 42
storj.io/drpc/drpcctx 41
storj.io/drpc/internal/drpcopts 30
storj.io/drpc/drpcstats 25
storj.io/drpc/drpcdebug 22
storj.io/drpc/drpcenc 15
Total 3611


drpc's Issues

WASM support?

Is it in the cards to have a WebSocket-based transport for use with WASM projects?

maximum number of connections

We've started rolling out drpc into production, and so far it's been great! However, one issue we've run into is what appears to be a memory leak in the server. After looking at the memory profiles for a bit, we see a bunch of memory being held by the drpcwire reader's ReadPacketUsing. My conjecture is that there is one of these per connection on the server, and each holds a buffer which grows to the maximum message size that has ever been read over that connection.

Is there a way to limit the maximum number of connections a server will allow? I see the InactivityTimeout is configurable, but no parameter for max connections.
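One workaround while there is no max-connections option in drpcserver itself: cap connections at the listener before handing it to the server. A minimal sketch, assuming the usual drpcserver.New/Serve entry points and using golang.org/x/net/netutil (not part of drpc):

package main

import (
	"context"
	"log"
	"net"

	"golang.org/x/net/netutil"
	"storj.io/drpc/drpcmux"
	"storj.io/drpc/drpcserver"
)

func main() {
	lis, err := net.Listen("tcp", ":9001")
	if err != nil {
		log.Fatal(err)
	}
	// Bound the number of simultaneously accepted connections. Since each
	// connection's read buffer can grow to the largest message seen on it,
	// bounding connections also bounds that memory.
	lis = netutil.LimitListener(lis, 1024)

	mux := drpcmux.New() // register your services on the mux as usual
	srv := drpcserver.New(mux)
	log.Fatal(srv.Serve(context.Background(), lis))
}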

Data race during Stream.MsgRecv vs drpcmanager.NewWithOptions

Hi,

I'm seeing the following data race (traces slightly redacted):

==================
WARNING: DATA RACE
Write at 0x00c000f20000 by goroutine 121347:
  runtime.slicecopy()
      /home/user/sdk/go/src/runtime/slice.go:310 +0x0
  storj.io/drpc/drpcwire.(*Reader).ReadPacketUsing()
      /home/user/go/pkg/mod/storj.io/[email protected]/drpcwire/reader.go:143 +0x528
  storj.io/drpc/drpcmanager.(*Manager).manageReader()
      /home/user/go/pkg/mod/storj.io/[email protected]/drpcmanager/manager.go:228 +0x184
  storj.io/drpc/drpcmanager.NewWithOptions.func1()
      /home/user/go/pkg/mod/storj.io/[email protected]/drpcmanager/manager.go:115 +0x39

Previous read at 0x00c000f20000 by goroutine 128191:
  google.golang.org/protobuf/internal/impl.(*MessageInfo).unmarshalPointer()
      /home/user/go/pkg/mod/google.golang.org/[email protected]/internal/impl/decode.go:104 +0x177
  google.golang.org/protobuf/internal/impl.(*MessageInfo).unmarshal()
      /home/user/go/pkg/mod/google.golang.org/[email protected]/internal/impl/decode.go:66 +0xdb
  google.golang.org/protobuf/internal/impl.(*MessageInfo).unmarshal-fm()
      <autogenerated>:1 +0xce
  google.golang.org/protobuf/proto.UnmarshalOptions.unmarshal()
      /home/user/go/pkg/mod/google.golang.org/[email protected]/proto/decode.go:105 +0x2f1
  google.golang.org/protobuf/proto.Unmarshal()
      /home/user/go/pkg/mod/google.golang.org/[email protected]/proto/decode.go:55 +0xc8
  gitlab.com/redacted/redacted/pkg/cache/pb.drpcEncoding_File_grpc_proto.Unmarshal()
      /home/user/git/redacted/pkg/cache/pb/grpc_drpc.pb.go:27 +0x91
  gitlab.com/redacted/redacted/pkg/cache/pb.(*drpcEncoding_File_grpc_proto).Unmarshal()
      <autogenerated>:1 +0x2e
  storj.io/drpc/drpcstream.(*Stream).MsgRecv()
      /home/user/go/pkg/mod/storj.io/[email protected]/drpcstream/stream.go:444 +0x8f
  storj.io/drpc/drpcpool.(*streamWrapper).MsgRecv()
      <autogenerated>:1 +0x81
  gitlab.com/redacted/redacted/pkg/cache/pb.(*drpcGrpc_ReadShardClient).Recv()
      /home/user/git/redacted/pkg/cache/pb/grpc_drpc.pb.go:102 +0x75
  ...

Goroutine 121347 (running) created at:
  storj.io/drpc/drpcmanager.NewWithOptions()
      /home/user/go/pkg/mod/storj.io/[email protected]/drpcmanager/manager.go:115 +0x7ae
  storj.io/drpc/drpcconn.NewWithOptions()
      /home/user/go/pkg/mod/storj.io/[email protected]/drpcconn/conn.go:46 +0x3a4
  storj.io/drpc/drpcconn.New()
      /home/user/go/pkg/mod/storj.io/[email protected]/drpcconn/conn.go:38 +0x347
  gitlab.com/redacted/redacted/pkg/cache/backends/drpc.(*Backend).newConn()
      /home/user/git/redacted/pkg/cache/backends/drpc/backend.go:345 +0xce
  ...

Goroutine 128191 (running) created at:
  gitlab.com/redacted/redacted/pkg/cache.(*Storage).readObject()
      /home/user/git/redacted/pkg/cache/storage.go:198 +0x2fd
==================

The binary is built with go install -race -gcflags=-l against commit 220d855. I have seen this data race with older versions too; I just never got around to reporting it.

The data race appears to be triggered when new connections are created while MsgRecv is running.

The data race is observed in the same system as in #37 where there are frequent context cancellations during read operations. If I pass in the background context instead, the data races go away.

Unfortunately I don't have a minimal code example that reproduces it. I could look into providing one if the problem isn't obvious just from eyeing the traces above.

Thanks,
Tommy

debug logging

The debug logging is pretty inflexible. It would be nice to be able to override the logging function, but at a minimum, using log.Output instead of creating a new standard logger would at least let me capture the data into our standard logging rather than redirecting stdout.

picobuf compatibility with regards to Marshal and Unmarshal.

At the moment picobuf.Marshal and picobuf.Unmarshal expect messages of type picobuf.Message, which has some specific methods. However, drpc endpoints create type definitions such as:

type drpcEncoding_File_certificate_proto struct{}

func (drpcEncoding_File_certificate_proto) Marshal(msg drpc.Message) ([]byte, error) {
	return picobuf.Marshal(msg)
}

func (drpcEncoding_File_certificate_proto) Unmarshal(buf []byte, msg drpc.Message) error {
	return picobuf.Unmarshal(buf, msg)
}

func (c *drpcCertificatesClient) Sign(ctx context.Context, in *SigningRequest) (*SigningResponse, error) {
	out := new(SigningResponse)
	err := c.cc.Invoke(ctx, "/node.Certificates/Sign", drpcEncoding_File_certificate_proto{}, in, out)
	if err != nil {
		return nil, err
	}
	return out, nil
}

I would expect Invoke to be registered with a more specific message type such as picobuf.Message, instead of relying on drpc.Message and expecting that Marshal and Unmarshal don't have additional constraints.

Detect end of stream on `drpcconn.Conn`

Right now no API exists to detect when an instance of drpcconn.Conn got disconnected without polling in some form. I could identify two ways to learn about a disconnect:

  • The next attempt to send something on the connection fails with "closed: end of stream".
  • There is Closed() on drpcconn.Conn which exposes the desired information, but has to be polled.

Something like a WaitForClosed() method which blocks until the connection is closed or - even better - a channel to wait on would be really useful.
The connection's Closed() just calls the connection manager's Closed(), which in turn checks the term signal. That signal appears to be exactly what I'm looking for.

I'd like to create a PR to resolve this issue. Would it be acceptable to (indirectly) expose the term signal's Wait() and/or Signal() methods on drpcmanager.Manager and drpcconn.Conn (e.g. as WaitForClosed()/ClosedSignal())?

My use case:
I have long-running DRPC connections, on which requests are sent every so often. Most of the time the client just idles, waiting for an external event to occur. This idling process should be stopped as soon as the DRPC connection goes away.
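In the meantime, here is a minimal polling sketch of the workaround described above (assuming Closed() reports a bool, as described; the proposed WaitForClosed()/ClosedSignal() would make this unnecessary):

import (
	"context"
	"time"

	"storj.io/drpc/drpcconn"
)

// waitForClosed is a hypothetical helper that polls conn.Closed() until the
// connection goes away or ctx is cancelled.
func waitForClosed(ctx context.Context, conn *drpcconn.Conn) {
	t := time.NewTicker(time.Second)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			if conn.Closed() {
				return
			}
		}
	}
}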

logging addresses

I'd like to log address data on both the client & the server. Is that data currently accessible?

On the server side drpc.Stream doesn't seem to expose any method to do that.

On the client side I'm logging right now by wrapping a drpcconn.Conn, and at that point the server isn't known.

Tag stable releases

The most recent tag (v0.0.20) is 74 commits behind master. Is there a newer stable release, or is that the one that people should use?

buffered Writer causes data races with net.Pipe

// Flush forces a flush of any buffered data to the io.Writer. It is a no-op if
// there is no data in the buffer.
func (b *Writer) Flush() (err error) {
	b.mu.Lock()
	if len(b.buf) > 0 {
		_, err = b.w.Write(b.buf)
		b.buf = b.buf[:0]
		atomic.StoreUint32(&b.empty, 0)
	}
	b.mu.Unlock()
	return err
}

The underlying assumption with this code is that Write() synchronously copies the data out of the buffer, such that it is safe to truncate the buffer to keep using it. This is unfortunately not the case with net.Pipe, which does the copy on the goroutine that reads from the pipe, and is thus asynchronous.

We're hitting these issues internally, as we do some internal comms over net.Pipe such that those internal comms use the same code as real networked communications.

Would you be willing to accept a patch?

What does the "D" stand for?

I was going through the documentation and blog posts but couldn't find any answer about the question; what does the "D" mean in DRPC?

Thanks

Python support

Add support to generate python servers and clients.

If you use gRPC with python, I could use some help with answering questions like:

  • is async like, a thing for sockets? i haven't pythoned since twisted was relevant
  • probably other things as the implementation comes along

Context cancellation closes the connection

Hi,

I'm looking into replacing grpc with drpc in one of my projects and initial tests show some nice performance improvements!

Switching to drpc was fairly painless. The only snag was that context cancelling seemingly closes the connection. This behavior is different from grpc, see my tweaked drpc/grpc examples in tomyl@acb08bd.

Is this behavior intentional? I can't see that the documentation mentions it.

Thanks,
Tommy

Support for custom codec?

Is there any support for using a codec other than protobuf? How would you implement a server using a custom codec?
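For reference, here is a minimal sketch of a JSON-based encoding with the same Marshal/Unmarshal shape as the generated code shown elsewhere on this page (note that, depending on the drpc version, the Encoding interface may also require JSON methods; see the "cleanup Encoding interface" issue below):

import (
	"encoding/json"

	"storj.io/drpc"
)

// Encoding serializes messages with encoding/json instead of protobuf. Pass an
// instance of it wherever the generated code passes its drpcEncoding_* value,
// or use protoc-gen-go-drpc's protolib option to swap the serialization library.
type Encoding struct{}

func (Encoding) Marshal(msg drpc.Message) ([]byte, error) {
	return json.Marshal(msg)
}

func (Encoding) Unmarshal(buf []byte, msg drpc.Message) error {
	return json.Unmarshal(buf, msg)
}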

Concurrent RPCs

A common issue that everyone will hit when using DRPC is how to handle concurrent RPCs. We should have some answer for this.

How to get protoc-gen-go-drpc

Hi folks,

I'm a bit new to protobuf and I'm trying to replace the existing grpc with drpc in my project.

I looked into the docs, which say "Place protoc-gen-go-drpc in your $PATH and have protoc generate your definitions", but I can find nowhere to get the protoc-gen-go-drpc binary. There is only a simple main.go and an unhelpful README under storj.io/drpc/cmd/protoc-gen-go-drpc.

May I know how to generate the drpc go code based on the proto I already have and where to get the required tool?

I'd appreciate your help!
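For what it's worth, the plugin is a normal Go binary, so something like the following should fetch, build, and use it (assuming Go 1.16+ and that protoc-gen-go is also installed for the base message types):

go install storj.io/drpc/cmd/protoc-gen-go-drpc@latest

# with the binary on your $PATH:
protoc --go_out=. --go-drpc_out=. path/to/your/service.proto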

migration steps

The migration steps in the blog post are:

  1. Release and deploy new server code that understands both gRPC and DRPC concurrently. With DRPC, this was a breeze since all of our application code could be used identically with both, and our ListenMux allowed us to do both from the same server port.
  2. Once all the servers and Nodes were updated, release and deploy new clients that spoke DRPC instead of gRPC.
  3. Once all of the old clients were gone, we removed the gRPC code.

I was just experimenting with that, and it all works well, except I realized that to use the migration, the drpc dialing needs to be switched to send the header:

nc, err := drpcmigrate.DialWithHeader(ctx, "tcp", addr, drpcmigrate.DRPCHeader)

It seems like, once grpc is gone, you'd ideally want to switch this back to a plain headerless dial. But won't doing that require taking down all servers and clients at the same time, since the headerless dial is incompatible with the listen multiplexer?

cleanup Encoding interface

https://pkg.go.dev/storj.io/drpc#Encoding

Can you remove the JSONMarshal/JSONUnmarshal methods from the Encoding interface? Not all proto encoders support JSON, and it is really only needed in the gateway. I suggest dropping these methods and, on the gateway side, switching based on content-type and marshaling/unmarshaling to/from JSON via some other interface.

Document the protocol in explicit detail

Some things that could use explicit documentation if one were to write a second, compatible implementation:

A good test would be to have someone other than those who have already worked on the code try to implement it in another language using only the documentation.

Javascript and/or web support

A commonly requested language is javascript. I haven't used gRPC with javascript before, so I don't know much about the use cases; here are some questions:

  1. Is server support desired?
  2. Is bi-directional streaming support desired?
  3. Does gRPC currently do bi-directional streaming?
  4. Is it targeting being run in a browser or is that "web" which is distinct from "javascript" and/or "node"?
  5. There's an example that shows how to serve both http and raw sockets on the same port, but only handles unitary http requests with json. Would that be sufficient server side support?

This issue is meant to collect information about use cases people have and potential answers to the above questions.

avoid string allocation for each request

If you change the generator to emit a named constant for each method, like

const (
  accountAccountServiceCreate = "/account.AccountService/Create"
)

func (c *drpcAccountServiceClient) Create(ctx context.Context, in *AccountCreateReq) (*Account, error) {
        out := new(Account)
        err := c.cc.Invoke(ctx, accountAccountServiceCreate, drpcEncoding_File_account_proto{}, in, out)
        if err != nil {
                return nil, err
        }
        return out, nil
}

then we can completely avoid a new string allocation for each request and minimize garbage.
What do you think?

Better example docs

Maybe an example server/client execution with output, showing how it can be tested and how the OT data is printed to the console?

What is the max message size handling approach?

grpc has grpc.MaxRecvMsgSize and grpc.MaxSendMsgSize which restrict the maximum amount of data which can be sent or received by a grpc client/server.

What is the strategy that drpc uses? I did some tests with sending large amounts of data with the defaults. 1024 * 1024 bytes works fine. 1024 * 1024 * 100 bytes causes a data overflow protocol error from the server.

Looking at the code, I can see various related options:

type ReaderOptions struct {
	// MaximumBufferSize controls the maximum size of buffered
	// packet data.
	MaximumBufferSize int
}

and

// Options controls configuration settings for a stream.
type Options struct {
	// SplitSize controls the default size we split packets into frames.
	SplitSize int

	// ManualFlush controls if the stream will automatically flush after every
	// message send. Note that flushing is not part of the drpc.Stream
	// interface, so if you use this you must be ready to type assert and
	// call RawFlush dynamically.
	ManualFlush bool

	// MaximumBufferSize causes the Stream to drop any internal buffers that
	// are larger than this amount to control maximum memory usage at the
	// expense of more allocations. 0 is unlimited.
	MaximumBufferSize int

	// Internal contains options that are for internal use only.
	Internal drpcopts.Stream
}

However, the intended strategy isn't really clear, nor is it clear what I should be setting. Can anyone help here?
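For what it's worth, here is a sketch of where those knobs appear to plug in, assuming the server options embed the manager options as in recent versions (field names may differ between releases):

import (
	"storj.io/drpc"
	"storj.io/drpc/drpcmanager"
	"storj.io/drpc/drpcserver"
	"storj.io/drpc/drpcstream"
	"storj.io/drpc/drpcwire"
)

func newServer(handler drpc.Handler) *drpcserver.Server {
	return drpcserver.NewWithOptions(handler, drpcserver.Options{
		Manager: drpcmanager.Options{
			// Reader.MaximumBufferSize "controls the maximum size of buffered
			// packet data" on the receiving side.
			Reader: drpcwire.ReaderOptions{MaximumBufferSize: 64 << 20},
			// Stream.MaximumBufferSize drops internal buffers larger than this
			// amount after use to control memory; 0 is unlimited.
			Stream: drpcstream.Options{MaximumBufferSize: 64 << 20},
		},
	})
}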

Question: What happens if SSL certs expire?

Planning on using dRPC and this package: https://pkg.go.dev/go.bryk.io/pkg/net/drpc - and trying to spec out whether SSL certs are a good idea for our system.

My concern is what happens when an SSL cert expires, or a certificate is changed (because a cert is due to expire shortly), and the client has an active connection to the dRPC server.

Will the client's connection be killed automatically? Will the client retrieve the new SSL certificate automatically?

I am planning on using Let's Encrypt autocert to automatically renew certs without requiring a server restart.

Could anyone give any advice around this? Thanks.

Incorrect JSON deser in HTTP example

The DRPC server in examples/drpc_and_http/server/main.go is returning JSON on a successful call from the HTTP client.

Example: {"cookie":{"type":"Chocolate"}}

examples/drpc_and_http/http_client/main.go should change the JSON deser to:
err = json.NewDecoder(resp.Body).Decode(&data.Response)

Not sure if the Status struct field is used at all on success.

graceful shutdown and handler timeout

I maintain a fork of go-micro (github.com/unistack-org/micro) and want to add drpc client/server support to it.
grpc does not have a native ability to pass a context to the server, and it also does not make it easy to add handler timeouts (via a context with timeout or something like that).
What do you think about graceful shutdown of drpc handlers? Do you plan to add a server option to specify a handler timeout?

Reconnect client to server

Hello everyone, I ran into a reconnect problem. If the server crashes with a panic and then recovers, the client's TCP connection of course cannot reach it, and apart from creating a new client I did not find a way to solve the problem. Does anyone have a ready-made practice for reconnecting in the background?
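As far as I know there is no built-in reconnect in drpcconn, so the usual approach is a small redial loop around net.Dial and drpcconn.New. A minimal sketch (the backoff strategy and error filtering are up to you):

import (
	"context"
	"net"
	"time"

	"storj.io/drpc/drpcconn"
)

// dialWithRetry keeps redialing until a connection succeeds or ctx is done.
func dialWithRetry(ctx context.Context, addr string) (*drpcconn.Conn, error) {
	for {
		var d net.Dialer
		raw, err := d.DialContext(ctx, "tcp", addr)
		if err == nil {
			return drpcconn.New(raw), nil
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(time.Second): // simple fixed backoff between attempts
		}
	}
}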

Is import path supported in drpc gen?

Hi guys,

I am trying to replace gRPC with dRPC in my testing project.

There are a few proto files in multiple directories under the same "pkg" directory, and some basic protos like "github.com.google/protobuf" under the "vendor" directory. When I try to generate the pb.go files, it keeps returning the error:

protoc-gen-go-drpc: invalid Go import path "descriptor" for "google/protobuf/descriptor.proto"
The import path must contain at least one period ('.') or forward slash ('/') character.
See https://developers.google.com/protocol-buffers/docs/reference/go-generated#package for more information.

Here is my gen cmd:

for dir in ./pkg/abc/ ./pkg/efg; do \
  build/werror.sh /go_path/native/x86_64-pc-linux-gnu/protobuf/protoc
-Ipkg:./vendor/github.com:./vendor/github.com/gogo/protobuf:./vendor/github.com/gogo/protobuf/protobuf:./vendor/go.etcd.io:./vendor/github.com/grpc-ecosystem/grpc-gateway/third_party/googleapis 
--go-drpc_out=paths=source_relative,protolib=github.com/gogo/protobuf $dir/*.proto;

There is indeed a file "./vendor/github.com/gogo/protobuf/protobuf/google/protobuf/descriptor.proto" with "option go_package = descriptor", but it's pulled in automatically and has existed in my local repo for a long time. I suspect the "-I" option doesn't work with the drpc generator, so it tries to locate the basic gogo proto from the wrong location?

I've been debugging this issue for more than a week and have looked at all the docs I could find, but with no luck.

I'd appreciate any help!

Client load balancing

How can I configure the client with a resolver that takes a list of sockets?

Like the original grpc.Dial(..., grpc.WithLoadBalancer(roundRobin)).
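As far as I know there is no built-in resolver/balancer like grpc's. A minimal client-side round-robin sketch over plain TCP (connection pooling such as drpcpool could sit on top of something like this):

import (
	"context"
	"net"
	"sync/atomic"

	"storj.io/drpc/drpcconn"
)

// roundRobin is a small helper you own, not a drpc feature: it dials the next
// address in the list on every call.
type roundRobin struct {
	addrs []string
	next  uint32
}

func (r *roundRobin) Dial(ctx context.Context) (*drpcconn.Conn, error) {
	n := atomic.AddUint32(&r.next, 1)
	addr := r.addrs[int(n-1)%len(r.addrs)]
	var d net.Dialer
	raw, err := d.DialContext(ctx, "tcp", addr)
	if err != nil {
		return nil, err
	}
	return drpcconn.New(raw), nil
}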

http based drpc client

I'm trying to create an API gateway that supports HTTP REST, grpc, and drpc on the same port.

I create a grpc server, a drpc http handler, and a plain http handler, and use them in an http2 server as the handler.

Plain http and grpc work fine, but drpc requests do not reach the http2 server :(
Is it possible to get this working with native drpc clients?

gsrv := grpc.NewServer(grpc.UnknownServiceHandler(h.ServeGRPC))
comboHandler := newComboMux(h, gsrv, drpchttp.New(h))
http2Server := &http2.Server{}
hs := &http.Server{Handler: h2c.NewHandler(comboHandler, http2Server)}
func (h *Handler) ServeDRPC(stream drpc.Stream, rpc string) error {
	ctx := stream.Context()
	logger.Infof(ctx, "drpc: %#+v", rpc)
	return nil
}

func (h *Handler) HandleRPC(stream drpc.Stream, rpc string) error {
	return h.ServeDRPC(stream, rpc)
}

func newComboMux(httph http.Handler, grpch http.Handler, drpch http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.ProtoMajor == 2 {
			ct := r.Header.Get("content-type")
			switch {
			case strings.HasPrefix(ct, "application/grpc"):
				grpch.ServeHTTP(w, r)
				return
			case strings.HasPrefix(ct, "application/drpc"):
				drpch.ServeHTTP(w, r)
				return
			}
		}
		httph.ServeHTTP(w, r)
	})
}

Kubernetes deployment advice

Hey, I was wondering what the best way of deploying this as an exposed service to kubernetes would be.

I made a yaml file containing a deployment and a service exposing port 9001 and 1 replica.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    app: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: censored 
          imagePullPolicy: Always
          ports:
            - containerPort: 9001
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
    - protocol: TCP
      port: 9001
      targetPort: 9001

I also made an Ingress that selects my Service from above, note I added the annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPC"

kind: Ingress
metadata:
  name: api-grpc-ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  rules:
    - host: "api.domain.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 9001

After this I pointed a subdomain to the IP of the node that has the Ingress pod running on it. When I run a port scan on api.domain.com, port 9001 is open.

Yet whenever I try to run a client locally to contact that exposed service (api.domain.com:9001), no error occurs when setting up the client but every rpc fails:

    server_test.go:366: context canceled
    server_test.go:371: manager closed: EOF
    server_test.go:377: manager closed: EOF
    server_test.go:377: manager closed: EOF
    server_test.go:366: manager closed: EOF
    server_test.go:371: manager closed: EOF
    server_test.go:377: manager closed: EOF
    server_test.go:377: manager closed: EOF
    ...

Does anyone know how to get this to work?
