
http2's People

Contributors

amekss, cachalots, dependabot[bot], dgrr, eole868, geronsv, juliens, kirilldanshin, ldez, liandeliang, pablolagos, vecpeng

http2's Issues

broken integration with fasthttp

Recent changes in fasthttp broke the integration with http2.

See: https://github.com/valyala/fasthttp/pull/1602/files

package main

import (
	"github.com/dgrr/http2"
	"github.com/valyala/fasthttp"
)

func main() {
	s := &fasthttp.Server{
		Handler: yourHandler,
		Name:    "HTTP2 test",
	}

	http2.ConfigureServer(s, http2.ServerConfig{})

	s.ListenAndServeTLS(...)
}

Results in:

/go/pkg/mod/github.com/dgrr/[email protected]/configure.go:70:16: cannot use cl.Do (value of type func(req *fasthttp.Request, res *fasthttp.Response) (err error)) as fasthttp.RoundTripper value in assignment: func(req *fasthttp.Request, res *fasthttp.Response) (err error) does not implement fasthttp.RoundTripper (missing method RoundTrip)
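The compile error indicates that fasthttp now expects a value implementing its RoundTripper interface rather than a bare function. Whatever the exact signatures PR 1602 settled on, the usual fix is a small func-to-interface adapter, the same pattern as net/http's HandlerFunc. Below is a sketch using hypothetical stand-in types, not fasthttp's real ones:

```go
package main

import "fmt"

// Hypothetical stand-ins for fasthttp's Request/Response types.
type Request struct{ uri string }
type Response struct{ status int }

// The shape the newer fasthttp expects (an assumption based on the
// compile error: a RoundTrip method instead of a bare func).
type RoundTripper interface {
	RoundTrip(req *Request, res *Response) error
}

// roundTripperFunc adapts a plain function to the interface.
type roundTripperFunc func(req *Request, res *Response) error

func (f roundTripperFunc) RoundTrip(req *Request, res *Response) error {
	return f(req, res)
}

func main() {
	do := func(req *Request, res *Response) error {
		res.status = 200
		return nil
	}
	// Compiles: the adapter value now has a RoundTrip method.
	var rt RoundTripper = roundTripperFunc(do)
	res := &Response{}
	_ = rt.RoundTrip(&Request{uri: "/"}, res)
	fmt.Println(res.status)
}
```

The real fix inside configure.go would wrap cl.Do in such an adapter type instead of assigning the function directly.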

Streaming would be nice.

Streaming of data frames, or support for the fasthttp streaming feature.
Currently, if the payload is longer than 65535 bytes it gets fragmented into multiple DATA frames without the END_STREAM flag.

I think the mechanism should be similar for longer payloads. Just look into it.
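The fragmentation rule being discussed can be sketched minimally: split the body into DATA frames no larger than the frame-size limit and set END_STREAM only on the final frame (per RFC 7540 §6.1). The dataFrame type here is a hypothetical stand-in, not this library's API:

```go
package main

import "fmt"

const flagEndStream = 0x1

// Hypothetical stand-in for a DATA frame.
type dataFrame struct {
	payload []byte
	flags   byte
}

// fragment splits a body into DATA frames no larger than max,
// marking only the final frame with END_STREAM.
func fragment(body []byte, max int) []dataFrame {
	var frames []dataFrame
	for len(body) > max {
		frames = append(frames, dataFrame{payload: body[:max]})
		body = body[max:]
	}
	frames = append(frames, dataFrame{payload: body, flags: flagEndStream})
	return frames
}

func main() {
	frames := fragment(make([]byte, 100000), 65535)
	// 100000 bytes at a 65535 limit -> 2 frames, last one ends the stream.
	fmt.Println(len(frames), frames[len(frames)-1].flags&flagEndStream != 0)
}
```

A true streaming mode would emit each frame as the body is produced instead of buffering the whole payload first.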

Support PUSH_PROMISE?

It'd be nice if routers or framework implementations like gramework or fiber could choose to push some content before being asked for it. Think about a possible implementation.

Cancel a request

The HTTP/2 protocol leaves room for cancelling a request: per Section 5.1, when a stream is in the half-closed state, the client can send a RST_STREAM frame to cancel the request and move the stream to the Closed state.

WARNING: DATA RACE

==================
WARNING: DATA RACE
Read at 0x00c0002ea130 by goroutine 23:
  github.com/dgrr/http2.(*serverConn).sendPingAndSchedule()
      C:/Users/test/go/pkg/mod/github.com/dgrr/[email protected]/serverConn.go:979 +0x3c
  github.com/dgrr/http2.(*serverConn).sendPingAndSchedule-fm()
      <autogenerated>:1 +0x33

Previous write at 0x00c0002ea130 by goroutine 14:
  github.com/dgrr/http2.(*serverConn).writeLoop()
      C:/Users/test/go/pkg/mod/github.com/dgrr/[email protected]/serverConn.go:984 +0x128
  github.com/dgrr/http2.(*serverConn).Serve.func2()
      C:/Users/test/go/pkg/mod/github.com/dgrr/[email protected]/serverConn.go:114 +0x76

Goroutine 23 (running) created at:
  time.goFunc()
      D:/Language/Go/src/time/sleep.go:177 +0x44

Goroutine 14 (running) created at:
  github.com/dgrr/http2.(*serverConn).Serve()
      C:/Users/test/go/pkg/mod/github.com/dgrr/[email protected]/serverConn.go:108 +0x393
  github.com/dgrr/http2.(*Server).ServeConn()
      C:/Users/test/go/pkg/mod/github.com/dgrr/[email protected]/server.go:87 +0xe69
  github.com/dgrr/http2.(*Server).ServeConn-fm()
      <autogenerated>:1 +0x47
  github.com/valyala/fasthttp.(*Server).serveConn()
      C:/Users/test/go/pkg/mod/github.com/valyala/[email protected]/server.go:2135 +0x4e5
  github.com/valyala/fasthttp.(*Server).serveConn-fm()
      <autogenerated>:1 +0x47
  github.com/valyala/fasthttp.(*workerPool).workerFunc()
      C:/Users/test/go/pkg/mod/github.com/valyala/[email protected]/workerpool.go:224 +0xe1
  github.com/valyala/fasthttp.(*workerPool).getCh.func1()
      C:/Users/test/go/pkg/mod/github.com/valyala/[email protected]/workerpool.go:196 +0x4a
==================

how to enable http2

I tried the client example code, but resp.Header.Protocol() returns HTTP/1.1.
Could you help me use the http2 client?

hc := &fasthttp.HostClient{
	Addr: "api.binance.com:443",
}

if err := fasthttp2.ConfigureClient(hc, fasthttp2.ClientOpts{}); err != nil {
	log.Printf("%s doesn't support http/2\n", hc.Addr)
}
req := fasthttp.AcquireRequest()
resp := fasthttp.AcquireResponse()
req.SetRequestURI("https://api.binance.com/api/v3/time")
req.Header.SetMethod("GET")
err := hc.Do(req, resp)
if err != nil {
	log.Fatalln(err)
}
fmt.Println(resp)
fmt.Printf("%d: %s, %s\n", resp.StatusCode(), resp.Body(), resp.Header.Protocol())

HPACK is not working correctly and I have no idea

HPACK is not working correctly. The tests pass, but whenever you make more than one request with the client to any server, you get a violation of the HPACK format.

Why do the tests pass? Because they should: the logic looks correctly implemented, and the tests are not biased, since they are extracted from the RFC examples (here).

The problem is around here. If the indexing changes, it sometimes works. But then the tests don't.

WTF
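One place HPACK implementations commonly go wrong is the prefixed integer coding that the indexing relies on. As a sanity check independent of this library's code, here is a standalone encoder for RFC 7541 §5.1, checked against the RFC's own worked example (1337 with a 5-bit prefix encodes to 1f 9a 0a):

```go
package main

import "fmt"

// encodeInt encodes i using an n-bit prefix as in RFC 7541 §5.1.
// In real HPACK the prefix byte is ORed into the pattern bits of the
// opcode; here it is emitted standalone for clarity.
func encodeInt(i uint64, n uint) []byte {
	max := uint64(1<<n - 1)
	if i < max {
		return []byte{byte(i)} // fits in the prefix
	}
	out := []byte{byte(max)} // prefix saturated, continue in septets
	i -= max
	for i >= 128 {
		out = append(out, byte(i%128+128)) // continuation bit set
		i /= 128
	}
	return append(out, byte(i))
}

func main() {
	// RFC 7541 Appendix C example: 1337, 5-bit prefix -> 1f 9a 0a
	fmt.Printf("% x\n", encodeInt(1337, 5))
}
```

Comparing an implementation's output against these RFC vectors for both the saturated and non-saturated cases tends to flush out off-by-one bugs in the index handling.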

ReadTimeout timer not resetting?

Keep-Alive connections get closed ReadTimeout seconds after connecting, even if they're actively being used to make HTTP requests:

I'm using a ReadTimeout of 5 seconds: 5 and 10 seconds after the initial request, the browser has to connect again.

I believe this issue is basically the same as this one: golang/go#16450

Post with empty data hangs

Hey, thanks for your effort in making this. In order to help, I'm trying to understand the state of this project in terms of compliance with the H2 spec, so I'm trying to run the h2spec suite against a simple server. Unfortunately it hangs during request initialization on this line for the client and on this line for the server. I might be wrong on this, but it looks like both server and client are waiting on the TCP socket for data to come in.
While I was fiddling around, I discovered that I can reproduce this behaviour by sending a POST request with no data:

curl -v -k -XPOST "https://localhost:8443/"
*   Trying ::1...
* TCP_NODELAY set
* Connection failed
* connect to ::1 port 8443 failed: Connection refused
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: O=fasthttp test
*  start date: Feb 17 21:19:20 2021 GMT
*  expire date: Feb 17 21:19:20 2022 GMT
*  issuer: O=fasthttp test
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fc47980ba00)
> POST / HTTP/2
> Host: localhost:8443
> User-Agent: curl/7.64.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
^C <= ^^^ it hangs here so I had to kill the process with ctrl+c

For the record, I also set up a simple HTTP/2 server from the stdlib for reference, and it mostly conforms to h2spec (146 tests, 142 passed, 0 skipped, 4 failed).

Anyway, I think h2spec could provide a good target and progress meter for this project. I'll try to figure out how to run it against this project, but if you have any clue where to look or how to fix this, that would be great :)

Dispatch RequestCtx in its own goroutine

The main problem this lib has is that handler dispatch is not performed on separate goroutines; the handleStreams goroutine blocks entirely until the handler finishes, which is not desirable.

OMG! it panics

panic: send on closed channel

goroutine 57799 [running]:
github.com/dgrr/http2.(*Ctx).resolve(0x40f3b6, {0xcc4820, 0xc00008a070})
/root/gopath/pkg/mod/github.com/dgrr/[email protected]/client.go:58 +0x36
github.com/dgrr/http2.(*Conn).writeLoop.func2.1({0x445d4f, 0xc000682f70}, {0xb1f200, 0xc000b03ce0})
/root/gopath/pkg/mod/github.com/dgrr/[email protected]/conn.go:414 +0x3d
sync.(*Map).Range(0xc000682f90, 0xc000682e78)
/root/go/src/sync/map.go:346 +0x2aa
github.com/dgrr/http2.(*Conn).writeLoop.func2()
/root/gopath/pkg/mod/github.com/dgrr/[email protected]/conn.go:412 +0x1bd
github.com/dgrr/http2.(*Conn).writeLoop(0xc001b98b40)
/root/gopath/pkg/mod/github.com/dgrr/[email protected]/conn.go:471 +0x407
created by github.com/dgrr/http2.(*Conn).Handshake
/root/gopath/pkg/mod/github.com/dgrr/[email protected]/conn.go:239 +0x7b

Flow control

Section

Looks like there's something called flow control which, if I understood correctly, allows an HTTP/2 endpoint to allocate resources efficiently by giving the other endpoint some feedback about its state. That means: if I am an HTTP/2 proxy server and I have one client, my window (my read buffer, or available memory) might be 104800 bytes. But as the server keeps running and more peers connect, I (the proxy) need to allocate resources more carefully. Thanks to WINDOW_UPDATE we can tell an endpoint that our window is, instead, 65535 bytes, so the endpoint should only be able to send us frames no larger than 65535 bytes.

Or at least that's what I understood...

So far this looks OK, but the problem introduced by the f**king RFC is that every stream can have its own window, and so can the connection. I'd like to make flow control per server instance, so we can easily balance the window across multiple connections; then every connCtx should have its own window, derived from the server's, and the same for streams. The streams should get their window from the connCtx.
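A toy model of the two-level accounting described above (all names hypothetical, not this library's API): sending data consumes both the stream window and the connection window, and a WINDOW_UPDATE on stream 0 replenishes the connection-level window (RFC 7540 §6.9):

```go
package main

import "fmt"

// flow tracks a connection-level window plus one window per stream.
type flow struct {
	connWindow int32
	streams    map[uint32]int32
}

func newFlow(initial int32) *flow {
	return &flow{connWindow: initial, streams: map[uint32]int32{}}
}

func (f *flow) openStream(id uint32, initial int32) { f.streams[id] = initial }

// send returns how many of n bytes may actually be sent on the stream:
// capped by both the stream window and the connection window.
func (f *flow) send(id uint32, n int32) int32 {
	if n > f.streams[id] {
		n = f.streams[id]
	}
	if n > f.connWindow {
		n = f.connWindow
	}
	f.streams[id] -= n
	f.connWindow -= n
	return n
}

// windowUpdate replenishes a window; id == 0 means the connection.
func (f *flow) windowUpdate(id uint32, incr int32) {
	if id == 0 {
		f.connWindow += incr
	} else {
		f.streams[id] += incr
	}
}

func main() {
	f := newFlow(65535)
	f.openStream(1, 65535)
	fmt.Println(f.send(1, 100000)) // capped at 65535
	f.windowUpdate(0, 1024)
	f.windowUpdate(1, 1024)
	fmt.Println(f.send(1, 100000)) // only 1024 available on both levels
}
```

The per-server balancing idea would sit one level above this: the server hands each connCtx its initial connWindow, and each stream derives its window from the connCtx, as described.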

Does not work using net.Listener with TLS

It seems that fasthttp2.ConfigureServer(server) does not work, even when the server is serving on a TLS listener. Am I doing something incorrect, or is there no support for this?

	listener, err := net.Listen("tcp4", "0.0.0.0:443")
	if err != nil {
		panic(err)
	}

	tlsListener := tls.NewListener(listener, tlsConfig)

	server := &fasthttp.Server {
		Handler: ServeHTTP,
	}

	fasthttp2.ConfigureServer(server)

	go server.Serve(tlsListener)

Go modules support

I noticed that this project is not using Go modules for dependency management. Any objections? If not, I can create a PR to enable it.

How to get rid of the channels while being thread-safe?

Yes, that's quite a challenge. I don't want to use channels, but how do you stay thread-safe then?
The main problem is that you can't serialize two requests at the same time, because one might modify the table while the other should apply those changes...

I'll need to investigate how nghttp2 does these kinds of things.

Does this http2 support proxy?

Hi,
This http2 code does not support proxies, right? fasthttp supports proxies, but I tested this code and it does not seem to support them, correct?

Client transport undefined

When I try the http2 client example, I have trouble with the Transport field.
My code:

...

func main() {
	c := &fasthttp.HostClient{
		Addr:  "api.binance.com:443",
		IsTLS: true,
	}
	if err := http2.ConfigureClient(c, http2.OptionEnableCompression); err != nil {
		panic(err)
	}

	count := int32(0)
	var wg sync.WaitGroup
	for i := 0; i < 20; i++ {
		for atomic.LoadInt32(&count) >= 4 {
			time.Sleep(time.Millisecond * 100)
		}

		wg.Add(1)
		atomic.AddInt32(&count, 1)
		go func() {
			defer wg.Done()
			defer atomic.AddInt32(&count, -1)

			req := fasthttp.AcquireRequest()
			res := fasthttp.AcquireResponse()

			req.Header.SetMethod("GET")
			// TODO: Use SetRequestURI
			req.URI().Update("https://api.binance.com/api/v3/exchangeInfo")

			err := c.Do(req, res)
			if err != nil {
				log.Fatalln(err)
			}

			body := res.Body()

			fmt.Printf("%d: %d\n", res.Header.StatusCode(), len(body))
			res.Header.VisitAll(func(k, v []byte) {
				fmt.Printf("%s: %s\n", k, v)
			})
			fmt.Println("------------------------")
		}()
	}

	wg.Wait()
}

The result:

$ go run main.go 

# github.com/dgrr/http2
../../../.gvm/pkgsets/go1.16/global/pkg/mod/github.com/dgrr/[email protected]/client.go:201:3: c.Transport undefined (type *fasthttp.HostClient has no field or method Transport)

WINDOW_UPDATE must be ignored on closed connection as stated by RFC 7540

I see a GO_AWAY with a STREAM_CLOSED error sent to the client when WINDOW_UPDATE is received right after the server sent END_STREAM and closed the stream locally. This happens when using the fasthttp2 client, which sends WINDOW_UPDATE even though it should already know that the server closed the stream locally: the client has a local half-closed stream and has just received a frame with the END_STREAM flag.

RFC says:
Endpoints MUST ignore [WINDOW_UPDATE] or [RST_STREAM] frames received in this state, though endpoints MAY choose to treat frames that arrive a significant time after sending END_STREAM as a connection error ([Section 5.4.1] of type [PROTOCOL_ERROR]

An easy fix is to add the FrameWindowUpdate type to the allowed frames, as in the patch below. Ideally, the timestamp of the close event might be tracked so the decision can be based on the age of the closed stream.

diff --git a/serverConn.go b/serverConn.go
index cd0e2ea..3df2c5f 100644
--- a/serverConn.go
+++ b/serverConn.go
@@ -374,7 +374,7 @@ loop:
                                }
 
                                if _, ok := closedStrms[fr.Stream()]; ok {
-                                       if fr.Type() != FramePriority {
+                                       if fr.Type() != FramePriority && fr.Type() != FrameWindowUpdate {
                                                sc.writeGoAway(fr.Stream(), StreamClosedError, "frame on closed stream")
                                        }

Thx,

Framing layer (with priorization, weight and dependency)

Section

I am not sure about this one. Implementing it would imply that:

  1. We have a goroutine per stream (that would be OK, but we need to think about the overhead).
  2. Only one goroutine reads from the connection and sends the frames it reads to the particular stream (if any).
  3. Communication between streams and the main goroutine (the one that reads from the conn) happens over channels.
  4. Another goroutine writes to the conn. Here we can just have two channels, one for reading and another for writing, but that's OK.

My concern: the code will look ugly and will just create more edge cases, more goroutine overhead, etc...
Also, how do prioritization and dependency of streams work? WTF, I don't want to have a graph of streams.

If someone has any idea, you are welcome to comment down here.
The only point I'd be OK implementing is a goroutine per stream. Streams don't live that long... but a goroutine pool could do the work, and it would speed things up quite a lot. It's already fast and currently works synchronously; think async!

panic: send on closed channel

I'm getting this panic pretty regularly (despite using the recover middleware):

panic: send on closed channel
goroutine 4449 [running]:
github.com/dgrr/http2.(*serverConn).writePing(0xc000253340)
        github.com/dgrr/[email protected]/serverConn.go:148 +0xad
github.com/dgrr/http2.(*serverConn).sendPingAndSchedule(0xc000253340)
        github.com/dgrr/[email protected]/serverConn.go:940 +0x1e
created by time.goFunc
        time/sleep.go:180 +0x31
