
faunadb-jvm's Introduction

FaunaDB JVM Drivers


Warning

Fauna is decommissioning FQL v4 on June 30, 2025. See the v4 end of life (EOL) announcement and related FAQ.

This driver is not compatible with FQL v10, the latest version.

This repository contains the FaunaDB drivers for the JVM languages. Currently, Java and Scala clients are implemented.

Features

Documentation

Javadocs and Scaladocs are hosted on GitHub:

Detailed documentation for each language:

Dependencies

Shared

Java

  • Java 11

Scala

  • Scala 2.11.x
  • Scala 2.12.x

Using the Driver

Java

Installation

Add the following dependency from the Maven Central repository:

faunadb-java/pom.xml:
<dependencies>
  ...
  <dependency>
    <groupId>com.faunadb</groupId>
    <artifactId>faunadb-java</artifactId>
    <version>4.5.0</version>
    <scope>compile</scope>
  </dependency>
  ...
</dependencies>
Basic Java Usage
import com.faunadb.client.FaunaClient;

import static com.faunadb.client.query.Language.*;

/**
 * This example connects to FaunaDB Cloud using the secret provided
 * and creates a new database named "my-first-database"
 */
public class Main {
    public static void main(String[] args) throws Exception {

        //Create an admin connection to FaunaDB.
        FaunaClient adminClient =
            FaunaClient.builder()
                .withSecret("put-your-key-secret-here")
                .build();

        adminClient.query(
            CreateDatabase(
                Obj("name", Value("my-first-database"))
            )
        ).get();
    }
}
Per query metrics

Several metrics are returned with each request in the response headers. Read this doc for more information. You can access these metrics as follows:

import com.faunadb.client.FaunaClient;

import static com.faunadb.client.query.Language.*;

import java.util.Optional;

public class Main {
    public static void main(String[] args) throws Exception {

        // Create an admin connection to FaunaDB.
        FaunaClient adminClient =
            FaunaClient.builder()
                .withSecret("put-your-key-secret-here")
                .build();

        MetricsResponse metricsResponse = adminClient.queryWithMetrics(
            Paginate(Match(Index("spells_by_element"), Value("fire"))),
            Optional.empty()
        ).get();

        // the result of the query; the same value you would get from 'adminClient.query'
        Value value = metricsResponse.getValue();

        // gets the value of 'x-byte-read-ops' metric
        Optional<String> byteReadOps = metricsResponse.getMetric(MetricsResponse.Metrics.BYTE_READ_OPS);
        // gets the value of 'x-byte-write-ops' metric
        Optional<String> byteWriteOps = metricsResponse.getMetric(MetricsResponse.Metrics.BYTE_WRITE_OPS);

        // you can get other metrics in the same way,
        // all of them are exposed via MetricsResponse.Metrics enum
    }
}
Custom headers

You can optionally send custom headers with each HTTP request. They can be defined when building the client:

FaunaClient adminClient =
    FaunaClient.builder()
        .withSecret("put-your-key-secret-here")
        .withCustomHeaders(
            Map.of(
                "custom-header-1", "value-1",
                "custom-header-2", "value-2"
            )
        )
        .build();
Document Streaming

Fauna supports document streaming, where changes to a streamed document are pushed to all clients subscribing to that document.

The streaming API is built on java.util.concurrent.Flow, which enables users to establish flow-controlled components in which Publishers produce items consumed by one or more Subscribers, each managed by a Subscription.

The following example assumes that you have already created a FaunaClient.

In the example below, we capture the first 4 messages by manually binding a Subscriber.

// docRef is a reference to the document for which we want to stream updates.
// You can acquire a document reference with a query like the following, but it
// needs to work with the documents that you have.
// Value docRef = Ref(Collection("scoreboards"), "123")

// Needed imports (assuming the standard JDK classes and the driver's Value type):
//   java.util.concurrent.Flow, java.util.concurrent.CompletableFuture,
//   java.util.ArrayList, java.util.List, com.faunadb.client.types.Value

Flow.Publisher<Value> valuePublisher = adminClient.stream(docRef).get();
CompletableFuture<List<Value>> capturedEvents = new CompletableFuture<>();

Flow.Subscriber<Value> valueSubscriber = new Flow.Subscriber<>() {
  Flow.Subscription subscription = null;
  ArrayList<Value> captured = new ArrayList<>();
  @Override
  public void onSubscribe(Flow.Subscription s) {
    subscription = s;
    subscription.request(1);
  }

  @Override
  public void onNext(Value v) {
    captured.add(v);
    if (captured.size() == 4) {
      capturedEvents.complete(captured);
      subscription.cancel();
    } else {
      subscription.request(1);
    }
  }

  @Override
  public void onError(Throwable throwable) {
     capturedEvents.completeExceptionally(throwable);
  }

  @Override
  public void onComplete() {
    capturedEvents.completeExceptionally(new IllegalStateException("not expecting the stream to complete"));
  }
};

// subscribe to publisher
valuePublisher.subscribe(valueSubscriber);

// blocking
List<Value> events = capturedEvents.get();

Detailed Java Documentation can be found here

Scala

Installation

faunadb-scala/sbt
libraryDependencies += ("com.faunadb" %% "faunadb-scala" % "4.5.0")
Basic Usage
import faunadb._
import faunadb.query._
import scala.concurrent._
import scala.concurrent.duration._

/**
  * This example connects to FaunaDB Cloud using the secret provided
  * and creates a new database named "my-first-database"
  */
object Main extends App {

  import ExecutionContext.Implicits._

  val client = FaunaClient(
    secret = "put-your-secret-here"
  )

  val result = client.query(
    CreateDatabase(
      Obj("name" -> "my-first-database")
    )
  )

  Await.result(result, Duration.Inf)
}
Per query metrics

Several metrics are returned with each request in the response headers. Read this doc for more information. You can access these metrics as follows:

import faunadb._
import faunadb.query._
import scala.concurrent._
import scala.concurrent.duration._

/**
  * This example connects to FaunaDB Cloud using the secret provided,
  * runs a query, and reads the per-query metrics from the response
  */
object Main extends App {

  import ExecutionContext.Implicits._

  val client = FaunaClient(
    secret = "put-your-secret-here"
  )

  val metricsResponse = Await.result(
    client.queryWithMetrics(
      Paginate(Match(Index("spells_by_element"), Value("fire"))),
      None
    ),
    Duration.Inf
  )

  // the result of the query; the same value you would get from 'client.query'
  val value = metricsResponse.value

  // gets the value of 'x-byte-read-ops' metric
  val byteReadOps = metricsResponse.getMetric(Metrics.ByteReadOps)
  // gets the value of 'x-byte-write-ops' metric
  val byteWriteOps = metricsResponse.getMetric(Metrics.ByteWriteOps)

  // you can get other metrics in the same way,
  // all of them are exposed via the Metrics enum
}
Custom headers

You can optionally send custom headers with each HTTP request. They can be defined in the client's constructor:

val client = FaunaClient(
  secret = "put-your-secret-here",
  customHeaders = scala.Predef.Map("custom-header-1" -> "value-1", "custom-header-2" -> "value-2")
)
Document Streaming

Fauna supports document streaming, where changes to a streamed document are pushed to all clients subscribing to that document.

The following sections provide examples for managing streams with Flow or Monix, and assume that you have already created a FaunaClient.

Flow subscriber

It is possible to use the java.util.concurrent.Flow API directly by binding a Subscriber manually.

In the example below, we capture the first 4 messages:

import faunadb._
import faunadb.query._
import faunadb.values.Value

import java.util.concurrent.{ConcurrentLinkedQueue, Flow}
import scala.collection.JavaConverters._
import scala.concurrent.Promise

// docRef is a reference to the document for which we want to stream updates.
// You can acquire a document reference with a query like the following, but it
// needs to work with the documents that you have.
// val docRef = Ref(Collection("scoreboards"), "123")

client.stream(docRef).flatMap { publisher =>
  // Promise to hold the final state
  val capturedEventsP = Promise[List[Value]]

  // Our manual Subscriber
  val valueSubscriber = new Flow.Subscriber[Value] {
    var subscription: Flow.Subscription = null
    val captured = new ConcurrentLinkedQueue[Value]

    override def onSubscribe(s: Flow.Subscription): Unit = {
      subscription = s
      subscription.request(1)
    }

    override def onNext(v: Value): Unit = {
      captured.add(v)
      if (captured.size() == 4) {
        capturedEventsP.success(captured.iterator().asScala.toList)
        subscription.cancel()
      } else {
        subscription.request(1)
      }
    }

    override def onError(t: Throwable): Unit =
      capturedEventsP.failure(t)

    override def onComplete(): Unit =
      capturedEventsP.failure(new IllegalStateException("not expecting the stream to complete"))
  }
  // subscribe to publisher
  publisher.subscribe(valueSubscriber)
  // wait for Future completion
  capturedEventsP.future
}
Monix

The reactive-streams standard offers strong interoperability across the streaming ecosystem.

We can replicate the previous example using the Monix streaming library.

import faunadb._
import faunadb.query._
import monix.execution.Scheduler
import monix.reactive.Observable
import org.reactivestreams.{FlowAdapters, Publisher}

// docRef is a reference to the document for which we want to stream updates.
// You can acquire a document reference with a query like the following, but it
// needs to work with the documents that you have.
// val docRef = Ref(Collection("scoreboards"), "123")

client.stream(docRef).flatMap { publisher =>
  val reactiveStreamsPublisher: Publisher[Value] = FlowAdapters.toPublisher(publisher)
  Observable.fromReactivePublisher(reactiveStreamsPublisher)
    .take(4) // 4 events
    .toListL
    .runToFuture(Scheduler.Implicits.global)
}

Building

The faunadb-jvm project is built using sbt.

To build and run tests against cloud, set the env variable FAUNA_ROOT_KEY to your admin key secret and run sbt test from the project directory.

Alternatively, tests can be run via a Docker container with FAUNA_ROOT_KEY="your-cloud-secret" make docker-test (an alternate Debian-based JDK image can be provided via RUNTIME_IMAGE).

To run tests against an enterprise cluster or developer instance, you will also need to set FAUNA_SCHEME (http or https), FAUNA_DOMAIN and FAUNA_PORT.
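As a sketch, the environment for such a test run might be set up like this (the key, host, and port values below are placeholders for your own deployment):

```shell
# Admin key secret used by the test suite (placeholder value)
export FAUNA_ROOT_KEY="your-admin-key-secret"

# Only needed when targeting an enterprise cluster or developer instance:
export FAUNA_SCHEME="https"           # http or https
export FAUNA_DOMAIN="db.example.com"  # placeholder host
export FAUNA_PORT="443"

# Then, from the project directory:
# sbt test
```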

License

All projects in this repository are licensed under the Mozilla Public License

faunadb-jvm's People

Contributors

adambollen, agourlay, alvarofauna, ashfire908, benjumanji, bitbckt, cleve-fauna, cynicaljoy, dijkstracula, eaceaser, erickpintor, fauna-arnaud, fauna-chase, faunaee, fireridlle, freels, henryfauna, jagedn, jfmiii, jrodewig, lregnier, macnealefauna, marrony, parkhomenko, retroryan, shestakovg, sprsquish, szegedi, trevor-fauna, vadimlf


faunadb-jvm's Issues

Integration with Android

Hi, can someone please help me with integrating this with native Android?
Is there a demo or example of native Android integration with FaunaDB?

please help me

Thanks in advance

NPE due to HTTP transport error

When a failure response doesn't come with a JSON body containing an errors field, we get an NPE.

java.lang.NullPointerException: null
        at faunadb.FaunaClient.handleQueryErrors(FaunaClient.scala:150)
        at faunadb.FaunaClient.$anonfun$query$2(FaunaClient.scala:116)
        at scala.util.Success.$anonfun$map$1(Try.scala:251)
        at scala.util.Success.map(Try.scala:209)
        at scala.concurrent.Future.$anonfun$map$1(Future.scala:287)
        at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
        at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
        at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:140)
        at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
        at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
        at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
        at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)

NPE raised when calling asArray on a value node that contains a null

"cause":{"type":"java.lang.NullPointerException","message":null,"stack":{"00":"com.google.common.base.Preconditions.checkNotNull(Preconditions.java:212)","01":"com.google.common.collect.ImmutableCollection$ArrayBasedBuilder.add(ImmutableCollection.java:449)","02":"com.google.common.collect.ImmutableList$Builder.add(ImmutableList.java:650)","03":"com.google.common.collect.ImmutableList$Builder.add(ImmutableList.java:627)","04":"com.fasterxml.jackson.datatype.guava.deser.GuavaImmutableCollectionDeserializer._deserializeContents(GuavaImmutableCollectionDeserializer.java:46)","05":"com.fasterxml.jackson.datatype.guava.deser.GuavaImmutableCollectionDeserializer._deserializeContents(GuavaImmutableCollectionDeserializer.java:14)"

UnknownException loses http response context

error.status always throws an exception.

There is also a missing space when concatenating the status code into the error message, resulting in messages such as: "Unparsable service 413response".

Javadocs for query functions point to old/missing documentation

I'm getting started with the scala driver. I noticed that the links point to the old documentation pages. For example:

/**
* A Paginate expression.
*
* '''Reference''': [[https://app.fauna.com/documentation/reference/queryapi#read-functions]]
*/
def Paginate(

should probably point to https://docs.fauna.com/fauna/current/api/fql/functions/paginate?lang=scala

I'm familiar with the docs (got the cheat sheet on my toolbar, too), so I am good there. But the links were confusing me for a bit :). If some folks new to Fauna are coming in with Scala in mind, that might throw them off more. 🤷🏻‍♂️

It looks like each query function could point directly to its respective docs page. That is something that I could help with. Just a tedious chore and no touching of business logic -- right up my alley for a Scala repo 🙂, assuming I understand the intent correctly.

NPE when creating an index

There is a unit test that already fails when executed against the current core master.
It fails because the Java driver expects the index to be represented with a path, but the db is returning a field instead.

Mapping result example.

Hi, could you provide an example of mapping a result from the Fauna database in Scala? Thanks!

val result2 = client.query(
    Paginate(Ref("databases"))
  ).map{
    res => println(res.to[String])
  }
  .recover {
   case NonFatal(t) => println(s"paginate database: ${t.getMessage}")
  }

res

VFail(List(FieldError(Expected String; found value ObjectV(Map(data -> ArrayV(Vector(RefV(my-first-database,Some(RefV(databases,None,None)),None))))) of type faunadb.values.ObjectV.,/)))

Codec.Record[T] breaks when case class contains a field named 'value'

Using fauna-scala 3.0.0

case class Type(value: String, count: Int)

object Type {
  implicit val codec: RecordCodec[Type] = Codec.Record[Type]
}

yields

[error] /home/agourlay/Workspace/allez-fauna/server/src/main/scala/agourlay/allez/api/Models.scala:54:55: type mismatch;
[error]  found   : String("count")
[error]  required: Int
[error]   implicit val codec: RecordCodec[Type] = Codec.Record[Type]
[error]                                                       ^
[error] one error found

whereas the following compiles

case class Type(v: String, count: Int)

object Type {
  implicit val codec: RecordCodec[Type] = Codec.Record[Type]
}

Fail to decode user define class in kotlin

I am trying to use the faunadb-jvm library inside a Kotlin project. I can successfully push/save Kotlin models to the server, but an issue arises when I try to decode them.

Example of my model in kotlin

class School : BaseFaunaModel {
    @FaunaField var name: String = UUID.randomUUID().toString()
    @FaunaField var ownerId: String = UUID.randomUUID().toString()

    constructor()

    @FaunaConstructor
    constructor(@FaunaField("name") name: String?, @FaunaField("ownerId") ownerId: String?) {
        this.name = name ?: "UNKNOWN"
        this.ownerId = ownerId ?: "UNKNOWN"
    }
}

I upload that model into the database which works fine and I can confirm the data is saved in the database:

client.query(Create(Collection("school", Obj("data", Value(schoolModel))))).get()

I try to fetch that same model and decode it back into my "School" model as above, but it always ends up empty or gives an error stating it could not be initialized into School:

val result = client.query(Get(Ref(Collection("school"), "<school-Id>"))).get()
val schoolResult = result.at("data").to(School::class.java).optional
val school = schoolResult.get()

Note: I can verify that I am actually getting the data back when I attach the debugger, and I can also verify that "data" is the correct path to retrieve the data. It just fails to convert the data back into its Kotlin class. I can convert the data into a map with no issue using the "toMap()" function and then manually parse out the fields, but I was hoping the Fauna library would save me that trouble, based on the documentation for Java.

Using scala-steward for automatic dependency upgrade PR

I believe this repository would benefit from using scala-steward.

This bot is excellent at proposing PRs to upgrade the dependencies automatically which helps the project to stay healthy.

I personally have had a good experience using it, and it takes only 5 minutes to set up AFAIK.

WDYT?

Creating entities using `Expr` is forbidden

Hi,

Currently a Codec.OBJECT only works with Value which extends from Expr.

But if I would like to create an entity like this, it seems to not be possible?

Create(
  Collection("relationships"),
  {
    data: {
      followee: Select("ref", Get(Match(Index("people_by_name"), "Alice"))),
      follower: Select("ref", Get(Match(Index("people_by_name"), "Dave")))
    }
  }
)

This is invalid, because Select is an Expr.

Runtime error encountered while attempting to run documentation example

Issue summary

I used sbt to create the following project where I have attempted to run the documentation example. When running the example I got the following run time error:
java.io.FileNotFoundException: /tmp/workspace/drivers/jvm/Release/public-jvm/faunadb-scala/target/scala-2.12/scoverage-data/scoverage.measurements.74 (No such file or directory)

Note

I started with the latest version of sbt and then downgraded to an older version. I recorded my last attempt with the older version with which I got the same results.

Repro details below

Project layout looks like:

./build.sbt
./src/main/scala/example/Hello.scala

build.sbt:

ThisBuild / scalaVersion := "2.12.11"
ThisBuild / organization := "com.offgridcompute"

lazy val  hello = (project in file("."))
  .settings(
    name := "Hello",
    libraryDependencies += ("com.faunadb" %% "faunadb-scala" % "2.11.0"),
  )

src/main/scala/example/Hello.scala

package example
import faunadb._
import faunadb.query._
import scala.concurrent._
import scala.concurrent.duration._

object Hello extends App {
  println("hello")
  import ExecutionContext.Implicits._
  val client = FaunaClient(secret="####################################")
  val result = client.query(
    CreateDatabase(
      Obj("name" -> "my-first-database")
    )
  )
  Await.result(result, Duration.Inf)
}

Project compiles but then blows up at run time when creating the FaunaClient object. Here is my sbt session that reproduces the error:

sbt:Hello> sbtVersion
[info] 1.2.8
sbt:Hello> compile
[info] Updating ...
[info] Done updating.
[info] Compiling 1 Scala source to /home/daniel/Sources/foo-build/target/scala-2.12/classes ...
[info] Done compiling.
[success] Total time: 5 s, completed Apr 30, 2020 3:45:48 PM
sbt:Hello> run
[info] Packaging /home/daniel/Sources/foo-build/target/scala-2.12/hello_2.12-0.1.0-SNAPSHOT.jar ...
[info] Done packaging.
[info] Running Hello 
hello
[error] (run-main-0) java.io.FileNotFoundException: /tmp/workspace/drivers/jvm/Release/public-jvm/faunadb-scala/target/scala-2.12/scoverage-data/scoverage.measurements.74 (No such file or directory)
[error] java.io.FileNotFoundException: /tmp/workspace/drivers/jvm/Release/public-jvm/faunadb-scala/target/scala-2.12/scoverage-data/scoverage.measurements.74 (No such file or directory)
[error] 	at java.io.FileOutputStream.open0(Native Method)
[error] 	at java.io.FileOutputStream.open(FileOutputStream.java:270)
[error] 	at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
[error] 	at java.io.FileWriter.<init>(FileWriter.java:107)
[error] 	at scoverage.Invoker$.$anonfun$invoked$1(Invoker.scala:42)
[error] 	at scala.collection.concurrent.TrieMap.getOrElseUpdate(TrieMap.scala:897)
[error] 	at scoverage.Invoker$.invoked(Invoker.scala:42)
[error] 	at faunadb.FaunaClient$.apply(FaunaClient.scala:42)
[error] 	at Hello$.delayedEndpoint$Hello$1(Hello.scala:10)
[error] 	at Hello$delayedInit$body.apply(Hello.scala:6)
[error] 	at scala.Function0.apply$mcV$sp(Function0.scala:39)
[error] 	at scala.Function0.apply$mcV$sp$(Function0.scala:39)
[error] 	at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:17)
[error] 	at scala.App.$anonfun$main$1$adapted(App.scala:80)
[error] 	at scala.collection.immutable.List.foreach(List.scala:392)
[error] 	at scala.App.main(App.scala:80)
[error] 	at scala.App.main$(App.scala:78)
[error] 	at Hello$.main(Hello.scala:6)
[error] 	at Hello.main(Hello.scala)
[error] 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[error] 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[error] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[error] 	at java.lang.reflect.Method.invoke(Method.java:498)
[error] Nonzero exit code: 1
[error] (Compile / run) Nonzero exit code: 1
[error] Total time: 1 s, completed Apr 30, 2020 3:45:53 PM

Environment

Java:

openjdk version "1.8.0_212"
OpenJDK Runtime Environment (Zulu 8.38.0.13-CA-linux64) (build 1.8.0_212-b04)
OpenJDK 64-Bit Server VM (Zulu 8.38.0.13-CA-linux64) (build 25.212-b04, mixed mode)

OS:
Ubuntu 18.04.3 LTS

usage examples in the README

Can we please have the basics of:

  • import the library to your code
  • instantiate a client with a secret
  • run a basic query, any query, maybe even one that doesn't need a database

In each language, inline in the README? The idea is to have something to copy and paste that proves out your build chain.

  • Scala
  • Java
  • Android?

Thanks!

Expose billing metrics

Responses from Fauna include all sorts of extra headers:

  'x-byte-read-ops': '34',
  'x-byte-write-ops': '0',
  'x-compute-ops': '2',
  'x-faunadb-build': '20.11.00.rc8-01f9c94',
  'x-query-bytes-in': '120',
  'x-query-bytes-out': '4459',
  'x-query-time': '7',
  'x-read-ops': '27',
  'x-storage-bytes-read': '3047',
  'x-storage-bytes-write': '0',
  'x-txn-retries': '0',
  'x-txn-time': '1605653866258457',
  'x-write-ops': '0'

(see docs)

Of these, only x-txn-time is exposed by the JVM driver.

I can see a few ways in which those metrics could be exposed:

  • Like x-txn-time is (which is exposed through long FaunaClient.getLastTxnTime()).
  • By recording them onto the MetricRegistry which is already getting injected into FaunaClient.
  • By adding a method CompletableFuture<Pair<Value, Map<String, String>>> queryWithHeaders(...) to FaunaClient which returns the response headers.
  • By making FaunaClient's constructor public so that one can decorate the Connection object to expose the metrics.
  • By enabling the injection of a ConnectionFactory into FaunaClient.Builder so that one can decorate the Connection object to expose the metrics.

I suppose that the second approach would be the cleanest one.

Allow HTTP compression

Based on our investigation, the faunadb-jvm client does not use any compression by default:

request.headers().set(HttpHeaderNames.CONTENT_TYPE, "application/json; charset=utf-8");

Netty supports a bunch of compression codecs, including gzip and deflate - https://netty.io/4.1/api/io/netty/handler/codec/compression/package-summary.html

It would be a great addition to allow enabling compression codec on the client side.
