
quilkin's Introduction

Quilkin logo


Quilkin is a non-transparent UDP proxy specifically designed for use with large scale multiplayer dedicated game server deployments, that ensures security, access control, telemetry data, metrics and more without the end user having to custom build and integrate this functionality into their game clients and servers directly.

Announcements

Project State

The project is currently in alpha status and is being actively developed. Expect things to break.

Not to be used in production systems.

Documentation

Releases

Development

This documentation is for the development version of Quilkin, currently active on the main branch. To view the documentation for a specific release, click on the appropriate release documentation link above.

Code of Conduct

Participation in this project comes under the Contributor Covenant Code of Conduct

Development and Contribution

Please read the contributing guide for directions on writing code and submitting Pull Requests.

Quilkin is in active development - we would love your help in shaping its future!

Community

There are lots of ways to engage with the Quilkin community:

Credits

Many concepts and architectural decisions were inspired by Envoy Proxy. Huge thanks to that team for the inspiration they provided with all their hard work.

Companies using Quilkin

Embark Studios

Licence

Apache 2.0

Quilly, the Quilkin mascot

quilkin's People

Contributors

baschtie, cheahjs, dependabot[bot], fredagsfys, iffyio, jake-shadle, luna-duclos, markmandel, markus-wa, moppius, rezvaneh, thisisnotapril, xampprocky, zezhehh


quilkin's Issues

Filter idea: Asynchronous Filtering

Be able to have a filter that is processed asynchronously.

This wouldn't have the ability to block or route requests, but would allow for processing and parsing of packets off the critical path.

We could also provide a "sample-rate" option, where you retrieve a percentage of packets - either randomly, or deliberately sequentially - for systems that don't need to inspect every packet.

The main use case I can think of is deep introspection of packets for malicious elements - taking a random sample of packets for processing could be fast enough to find a bad actor before they can cause large scale harm, without drastically slowing down processing speed.
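As a rough illustration, a minimal sketch of the shape such a filter could take - AsyncFilter, sample_rate and observe are hypothetical names, not part of the current codebase:

use rand::Rng;
use std::net::SocketAddr;

/// Hypothetical trait for filters that observe packets off the critical
/// path: they can neither block nor re-route traffic.
pub trait AsyncFilter: Send + Sync {
    /// Fraction of packets (0.0..=1.0) this filter wants to observe.
    fn sample_rate(&self) -> f64 {
        1.0
    }

    /// Called with a copy of the packet, concurrently with forwarding.
    fn observe(&self, from: SocketAddr, contents: Vec<u8>);
}

/// The proxy could sample packets like this before handing them off.
fn maybe_observe(filter: &dyn AsyncFilter, from: SocketAddr, contents: &[u8]) {
    if rand::thread_rng().gen::<f64>() < filter.sample_rate() {
        filter.observe(from, contents.to_vec());
    }
}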

Use a consistent pattern for error handling

Currently we use a mixture of tokio::Result and std::result::Result<_, std::io::Error> etc. when returning errors, which isn't ideal.
We want to pick a single way to handle errors in the codebase, e.g. by leveraging an existing library, handcrafting error structs ourselves, etc.
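For example, a minimal sketch of the "existing library" route, using the thiserror crate (the error variants shown are illustrative):

use thiserror::Error;

/// A single crate-wide error type (variants are illustrative).
#[derive(Error, Debug)]
pub enum Error {
    #[error("io error: {0}")]
    Io(#[from] std::io::Error),
    #[error("invalid configuration: {0}")]
    Config(String),
}

/// A crate-wide Result alias, so call sites stop mixing tokio::Result
/// and std::result::Result<_, std::io::Error>.
pub type Result<T> = std::result::Result<T, Error>;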

Possible refactor: Make Packet Filtering + Content Routing separate traits

It has come up in discussion that it might be optimal to have separate traits for Content Filtering/Modifications, and also for Content Routing.

Right now, they are a single Filter trait which can do both, but it could be split in two.

This ticket exists to keep track of this design choice, and see if it is applicable as we build out more usable filters and cover more use cases.

Question

Does Envoy/xDS have this as a concept already?

Context:

  1. Whether routing should be a separate API/config layer from filtering. On this, I'm not sure. On one side, I like how we have one interface for modification. This is nice from the perspective of utilising and writing Filters -- there's less to learn, and configuration seems like it should be simpler. I can also see cases (such as this one) wherein being able to manipulate packets AND route at the same time is very useful. That being said, having a separation of the Filters and Routers (wondering if Envoy has a name for this already) might lead to more flexible implementations that are nicely decoupled. My take for now -- let's write up a ticket with the question/sacrificial design of what it would look like with a split. As we get deeper into creating more filters, we can go back to that ticket and see if it makes sense as we get more experience with the system. I'd hate to add complexity before we really feel we need it, or go down a path of premature optimisation. How does that sound?

Originally posted by @markmandel in #98 (comment)
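For reference, a sacrificial sketch of what the split could look like - trait and method names are placeholders, and EndPoint is the codebase's existing endpoint type:

/// Filters only transform or reject packet contents...
pub trait Filter: Send + Sync {
    /// Return the (possibly modified) packet, or None to drop it.
    fn filter(&self, contents: Vec<u8>) -> Option<Vec<u8>>;
}

/// ...while Routers only choose destinations.
pub trait Router: Send + Sync {
    /// Return the subset of endpoints the packet should be sent to.
    fn route(&self, endpoints: &[EndPoint], contents: &[u8]) -> Vec<EndPoint>;
}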

Filter Idea: DTLS

Just noticed that godot implemented DTLS into their UDP stack:
https://godotengine.org/article/dtls-report-1
https://godotengine.org/article/enet-dtls-encryption

From an initial review, DTLS gives us encryption, without the overarching UDP reliability layer that QUIC has.

That being said, the best DTLS libraries I've found are tied to OpenSSL (I can't seem to find a pure Rust implementation), which has portability concerns. But that was only an initial review of the ecosystem.

Resources:

https://users.rust-lang.org/t/announcing-udp-stream/35011 (tokio udp-stream lib with little documentation, but a dtls example which is pretty simple!)
https://github.com/simmons/tokio-dtls-example
https://github.com/ctz/rustls (dtls wip)
https://crates.io/search?q=dtls

Expose Metrics

(Placeholder for more detailed design)

We should expose some metrics!

I would recommend: https://github.com/open-telemetry/opentelemetry-rust as the library to use.

Since it's backend agnostic, it can send metrics to a wide variety of backends.

These stats will likely run through both core Quilkin and through Filters as well.
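To illustrate the kind of surface involved, here is a sketch using the prometheus crate rather than the recommended opentelemetry-rust, purely because its API is smaller - the metric name is made up:

use prometheus::{Encoder, IntCounter, Registry, TextEncoder};

fn main() -> prometheus::Result<()> {
    // A registry that both core Quilkin and Filters could share.
    let registry = Registry::new();

    // A counter a filter might increment per packet (name is illustrative).
    let packets = IntCounter::new("packets_total", "Total packets processed")?;
    registry.register(Box::new(packets.clone()))?;

    packets.inc();

    // Render the current values in the Prometheus text exposition format.
    let mut buffer = Vec::new();
    TextEncoder::new().encode(&registry.gather(), &mut buffer)?;
    println!("{}", String::from_utf8(buffer).unwrap());
    Ok(())
}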

Blocklist Filter

Thinking about server-side DDoS mitigation, or just general abuse mitigation - being able to block specific addresses from sending traffic through to any gameserver would be useful.

Something like:

version: v1alpha1
static:
  filters:
    - name: quilkin.extensions.filters.block.v1alpha1.Block
      config:
        addresses:
          - "194.56.36.79"
          - "3FFE:0000:0000:0001:0200:F8FF:FE75:50DF"
  endpoints:
    - name: server-1
      address: 127.0.0.1:7001

Question: Not sure if we need to block IP and port, or just IP?
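A minimal sketch of the filter's core logic, blocking on IP alone as per the question above (struct and method names are illustrative):

use std::collections::HashSet;
use std::net::{IpAddr, SocketAddr};

/// Hypothetical Block filter: drops packets from configured addresses.
pub struct Block {
    addresses: HashSet<IpAddr>,
}

impl Block {
    /// Returns the packet untouched unless the sender's IP is blocked.
    /// Matching on the full SocketAddr instead would block IP and port.
    pub fn filter(&self, from: SocketAddr, contents: Vec<u8>) -> Option<Vec<u8>> {
        if self.addresses.contains(&from.ip()) {
            None // reject: drop the packet
        } else {
            Some(contents)
        }
    }
}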

Add health endpoint

#65 includes an HTTP server with an endpoint for serving metrics. We want to reuse this server to include an endpoint for health checks.

Implement a control plane

The xDS control planes out there today seem to be directed at proxying HTTP and TCP traffic, so it seems simpler to roll our own rather than attempt to adapt one of them to our use case.
go-control-plane thankfully seems to make writing an xDS control plane really easy - it handles running a gRPC server that speaks the xDS protocol with proxies, and is backed by a cache which an implementation needs to populate.

The rough workflow would be that our code finds out what gameservers/upstream endpoints are available in the cluster and updates the cache. Each Quilkin proxy watches resources by contacting go-control-plane's gRPC server, which then feeds it data from the cache whenever it is updated.

Currently wondering what a simple workflow would be with e.g. Agones + Open Match - say a loop that watches GameServers that have been Allocated and populates the cache with their addresses as Endpoints. One difference in this case is that we'll need to ensure all connected proxies have acknowledged the update containing a GameServer's address before Open Match lets any game client talk to it, otherwise a race condition can cause initial packets from clients to be dropped, since the proxies won't know about the new address.
So this would need some kind of synchronisation between the control plane and Open Match? (Say a CRD that is updated by the control plane server and watched by the director?)

Would this make sense? Is this missing something? Thoughts?

UDP Encryption

This is a placeholder issue to put links for various libraries, ideas and research into encryption for UDP.

FilterChain for each Session

Context: #69 (comment)

If you need to keep per-session state in a filter, let's create and store an individual FilterChain for each Session.

We're almost all of the way there - https://github.com/googleforgames/quilkin/blob/master/src/server/sessions.rs#L41 - we can likely drop the Arc requirement from the constructor and ensure the session gets its own owned FilterChain.
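A minimal sketch of the constructor change, assuming the codebase's FilterChain type:

/// Each Session owns its chain, so filters can keep per-session state
/// without synchronisation.
pub struct Session {
    filter_chain: FilterChain,
}

impl Session {
    /// was: fn new(filter_chain: Arc<FilterChain>) -> Self
    pub fn new(filter_chain: FilterChain) -> Self {
        Session { filter_chain }
    }
}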

Question

Can we think of a Filter that might need to retain global state, rather than per-session state, and if so, how would we manage that?

Avoid copying endpoints list for every packet

Currently, we make a copy of the list of endpoints for every packet that flows through the system:
https://github.com/googleforgames/quilkin/blob/master/src/proxy/server.rs#L155

This isn't ideal, since the copies can be expensive and the list sizes can be significant.
We can instead wrap the list within a smart pointer like an Arc, clone that rather than cloning the list, and expose the wrapper to the rest of the code and filter chain - so that the overhead becomes a counter increment instead of a list copy.
e.g.

pub struct Endpoints(Arc<Vec<EndPoint>>);

and pass this type in DownstreamContext as endpoints.
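For illustration, cloning the wrapper is then just a reference-count increment (EndPoint is the codebase's existing endpoint type):

use std::sync::Arc;

#[derive(Clone)]
pub struct Endpoints(Arc<Vec<EndPoint>>);

impl Endpoints {
    pub fn new(endpoints: Vec<EndPoint>) -> Self {
        Endpoints(Arc::new(endpoints))
    }
}

// Per packet: a cheap Arc clone, not a copy of the Vec.
// let per_packet = endpoints.clone();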

Is there a benefit to proxy_mode?

Starting context

Let's question this original design decision.

The original idea behind proxy_mode was that Filters would often have paired logic, one for client side proxying and one for server side proxying. So it would be expected for logic to change depending on the proxy mode, to match what was happening on the other side.

To provide a concrete example, here is what a Filter that does compression/decompression would look like:

version: v1alpha1
proxy:
  mode: CLIENT
static:
  filters:
    - name: quilkin.extensions.filters.compression.v1alpha1.Compression
      config:
          mode: SNAPPY
  endpoints:
    - name: server-1
      address: 127.0.0.1:7001

Data going to the Client Proxy would be compressed when sent up to the proxy, and compressed data would be expected to come back from a Server Proxy on the way back down.

Conversely, a proxy in proxy_mode: SERVER will expect incoming data from a client to its local port to be compressed, and will send decompressed data back through to the Server.

The whole idea being that it would be easier to reconcile whether something was a "Client" or a "Server", because it's written in the actual configuration.

The flip side to this, is if we throw away the whole notion of Client/Server proxies as first class citizens - and instead just have Proxies, with explicit Filter configurations. So the Client Proxy configuration as above would instead be something like:

version: v1alpha1
static:
  filters:
    - name: quilkin.extensions.filters.compression.v1alpha1.Compression
      config:
          mode: SNAPPY
          receive:
            local: compress
            endpoints: decompress
  endpoints:
    - name: server-1
      address: 127.0.0.1:7001

This provides the same logic, but is explicit at the Filter level about what happens when receiving data in each direction and what we should do about it. (Not sure about the naming of the config elements, but hopefully you get where I'm going with this.)

This is indeed a more flexible setup, but it also requires more verbose configuration from the end user.

So - what do we think? As we're writing more filters, do we feel more comfortable with a more flexible design (it seems that way, but good to double check these things), or something that's a bit more opinionated?

Filter Idea: Rate limiting

Since a game server / client usually has a known frequency for sending packets, having a rate limiting filter seems like an easy win for protecting game servers.

This would likely be more relevant on the receiver (game server) side, but may have applications on the client (sender) side.
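A minimal token-bucket sketch of what per-sender rate limiting could look like (field names and the refill strategy are illustrative):

use std::time::{Duration, Instant};

pub struct RateLimiter {
    capacity: u32,     // max packets per window
    tokens: u32,
    window: Duration,  // e.g. one second
    last_refill: Instant,
}

impl RateLimiter {
    pub fn new(capacity: u32, window: Duration) -> Self {
        RateLimiter { capacity, tokens: capacity, window, last_refill: Instant::now() }
    }

    /// Returns true if the packet is within the allowed rate.
    pub fn allow(&mut self) -> bool {
        if self.last_refill.elapsed() >= self.window {
            self.tokens = self.capacity;
            self.last_refill = Instant::now();
        }
        if self.tokens > 0 {
            self.tokens -= 1;
            true
        } else {
            false
        }
    }
}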

Rules and Actions based on Filter events

Answering the question - what should happen if one or more packets are invalid for a filter?

For example, if we had a rate limiting filter and we get more packets than allowed from an IP and port, how do we notify something that that player may need to be blocked/banned?

Some ideas:

  • Fire a webhook
  • Fire an embedded script like in #13

Continuous Integration

At some point we will probably want to have some kind of continuous integration.

I probably lean toward Cloud Build - mainly because it's a nice secure loop to push any potential Docker images into Google Container Registry, but there are various pros and cons we should consider, such as if we need to run tests on Windows.

[Client] LB across multiple endpoints?

Should a Client proxy be able to send packets to multiple Server proxy endpoints, probably in some sort of load-balanced way - such as round robin, or a random manner?

This provides another layer of redundancy in case a single Server proxy goes down and it takes time to realise and move to a new one. At least this way, some traffic is still going through -- but seemingly at a slightly lower latency.

Maybe a configuration like:

local:
  port: 7000 # the port to receive traffic to locally
client:
  connection_id: 1x7ijy6 # the connection string to attach to the traffic
  lb_policy: ROUND_ROBIN # load balance policy. Round Robin / Random / ???
  endpoints:
    -  127.0.0.1:7001 # the address to send traffic to
    -  127.0.0.1:7002
    -  127.0.0.1:7003
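For reference, ROUND_ROBIN selection could be as simple as an atomic counter over the endpoint list - a sketch, assuming the list is non-empty:

use std::net::SocketAddr;
use std::sync::atomic::{AtomicUsize, Ordering};

pub struct RoundRobin {
    next: AtomicUsize,
}

impl RoundRobin {
    /// Pick the next endpoint in rotation; wraps around the list.
    pub fn pick<'a>(&self, endpoints: &'a [SocketAddr]) -> &'a SocketAddr {
        let i = self.next.fetch_add(1, Ordering::Relaxed);
        &endpoints[i % endpoints.len()]
    }
}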

Consolidate Filter Trait to two functions

Objective

Make the Filter trait simpler, to make it easier to understand how to write Filter functionality, and reduce confusion around which filter function should be used for what abilities.

Background

Requirements and Scale

The requirement is to simplify the filtering surface area, without removing any potential filter functionality / filter applicability.

Design Ideas

The proposed solution is to consolidate the Filter trait into an implementation that has only two functions:

  1. One that covers data coming into the local receiving port and going out through any/all of the endpoints.
  2. One that covers data coming in from an endpoint and going back out through the local port.

The only signature that actually needs to change is endpoint_receive_and_send_to_local, to ensure the from and to addresses are present. This has the additional benefit that filters implementing endpoint_receive_and_send_to_local can look at both the from and to addresses, which wasn't possible before.

use std::net::SocketAddr;

/// Filter is a trait for routing and manipulating packets.
pub trait Filter: Send + Sync {
    /// local_receive_and_send_to_endpoint filters packets received from the local port, and potentially sends them
    /// to configured endpoints.
    /// This function should return the array of endpoints that the packet should be sent to,
    /// and the packet that should be sent (which may be manipulated) as well.
    /// If the packet should be rejected, return None.
    fn local_receive_and_send_to_endpoint(
        &self,
        endpoints: &Vec<EndPoint>,
        from: SocketAddr,
        contents: Vec<u8>,
    ) -> Option<(Vec<EndPoint>, Vec<u8>)>;

    /// endpoint_receive_and_send_to_local filters packets received from `from`, to a given endpoint,
    /// that are going back to the original sender.
    /// This function should return the packet to be sent (which may be manipulated).
    /// If the packet should be rejected, return None.
    fn endpoint_receive_and_send_to_local(
        &self,
        endpoint: &EndPoint,
        from: SocketAddr,
        to: SocketAddr,
        contents: Vec<u8>,
    ) -> Option<Vec<u8>>;
}

Questions:

  • I'm not sure I like the filter names local_receive_and_send_to_endpoint and endpoint_receive_and_send_to_local, but I feel they are clear about what is happening at each point. If anyone has alternatives, I would love to hear them.
  • Can we think of any types of Filters that can't be written with this API surface? (@markmandel I can't think of any, but wanted to ask the question)

Alternatives Considered

  • Leave this as is. But we're seeing some confusion about which filter function should be used for what when implementing features.

Version configuration

We should have a version for the configuration files.

Probably something akin to K8s, with an apiVersion: v1alpha1 on the first line.

I don't think we need a grouping like apps/v1 as K8s does, though.

[Alpha Release] Let's write some documentation

We should write some docs.

  • Development guide
  • Configuration
  • Quickstarts
    • ncat
    • Agones & Agones w/ compression
  • Writing Filters
  • Individual Filters
  • UDP Session Management
  • List available metrics
  • FAQ

Add helper to manage test resources

#81 (comment)

We would like a better way to clean up after tests.

Currently, tests that create resources at runtime need to remember to explicitly shut them down, otherwise those resources might not get cleaned up until the test process exits.
This isn't ideal since it's error-prone and adds noise to tests in the form of cleanup code.

One solution might be to have a helper object that tests go through to create resources; the helper internally tracks any created resource that needs to be cleaned up, and does so at the end of the test. This way, test cleanup code lives in one place.
For example, from a test's pov:

    #[test]
    fn my_test() {
        let t = TestHelper::new(config...); // Can be called whatever :P
        let (rx, tx) = t.run_proxy();
        let socket = t.ephemeral_socket();
        let echo = t.echo_server();
        // test logic involving rx, tx, socket etc....
        // done, no cleanup code required.
    }

And internally test helper does something like:

impl TestHelper {
    fn run_proxy(&mut self) {
        // Save away the shutdown fn instead of returning it to the test.
        self.proxy_shutdown.push(self.create_proxy_and_return_shutdown_fn());
    }
}

impl Drop for TestHelper {
    // When the test is done, the helper goes out of scope, so we do any
    // cleanup work here. This way we don't have to remember to clean up.
    fn drop(&mut self) {
        for shutdown in self.proxy_shutdown.drain(..) {
            shutdown();
        }
    }
}

Filter idea: Simple routing with every packet having the connection id appended

This design is not necessarily meant for production workloads; it is a simple design for testing and POC work, and also gives us a simple filter to get started with, implementation-wise, that covers both a sender and a receiver.

UDP Structure

The connection_id goes at the end of every packet, so as to not change the positioning of the game data.

[ game data ][ connection_id ]

Sender implementation

The Sender will append the connection_id to the end of every packet it receives, and forward that to the receiver proxies.

Configuration options:

  • connection_id: the connection_id to attach to the packet.

Receiver implementation

Configuration options:

  • length: the connection_id length at the end of the packet.
  • strip: whether to remove the connection_id before passing the packet on. Defaults to true.

Logic

The Receiver will introspect the packet for the connection_id, and query the endpoints to see which ones contain that connection_id. The packet is then routed only to the endpoints that retain that connection_id.
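A minimal sketch of both sides of the packet format (function names are illustrative):

/// Sender side: append the connection_id to the end of the packet,
/// leaving the game data's positioning untouched.
fn append_connection_id(mut contents: Vec<u8>, connection_id: &[u8]) -> Vec<u8> {
    contents.extend_from_slice(connection_id);
    contents
}

/// Receiver side: split off the trailing `length` bytes as the
/// connection_id. Returns None if the packet is too short. When
/// `strip` is false, the caller would forward the packet unmodified.
fn take_connection_id(mut contents: Vec<u8>, length: usize) -> Option<(Vec<u8>, Vec<u8>)> {
    if contents.len() < length {
        return None;
    }
    let connection_id = contents.split_off(contents.len() - length);
    Some((contents, connection_id))
}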

Integration Tests

We should have some kind of integration / e2e tests. Not sure how we want to do this, but it makes sense to have them at some point.

I wonder if we could use something like Docker Compose to set up complex proxy and network scenarios? Just a thought.

Public chat room

As we might be garnering some extra community members soon, it seems like a good time to discuss having an (eventually) public chat room system for this project.

Seems like the team is strongly in favour of Discord (also, so is the community https://twitter.com/Neurotic/status/1320776366409699328).

Now it comes down to an organisational issue: do we have a specific Discord for this project, or does it become part of some kind of larger Discord?

Authentication

We should talk about authentication and figure out if we want to deal with auth at all, and if so, what we want to do.

This is auth as in receiving a JWT token on initial connection (think: similar to how a streaming gRPC call might have auth metadata as it's initiated, which can then be validated).

gRPC configuration management control plane API

(Placeholder for further discussion).

We'll need some kind of API surface for dynamic configuration values. Most likely to provide the capabilities for people to create their own control planes.

The way Envoy does it is quite well tested, and we can likely lift many ideas from it.
https://www.envoyproxy.io/docs/envoy/latest/api-docs/xds_protocol

(Not the only API Envoy has, may also be worth looking at the whole thing: https://www.envoyproxy.io/docs/envoy/latest/api/api)

Make metric port configurable

Just ran into this when trying to run a local demo - I couldn't spin up two instances of Quilkin locally, as both were trying to bind to 9091.

It would be handy to make this configurable in the YAML.

Avoid need to clone metadata keys

Currently, keys in dynamic metadata are Strings, which requires an owned copy to be provided when storing a value.
This forces the user to clone the same string each time a key is added to the metadata, which can be unnecessarily expensive when done for every packet.

Since keys are immutable, we shouldn't need to make a new copy each time; we should be able to reuse the same value by updating the metadata key type to an Arc<String> instead, e.g. pub metadata: HashMap<Arc<String>, Box<dyn Any + Send>>
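A small sketch of the resulting usage (the key name is illustrative):

use std::any::Any;
use std::collections::HashMap;
use std::sync::Arc;

fn main() {
    // Create the key once, e.g. when the filter is constructed...
    let key: Arc<String> = Arc::new("my-filter/my-key".into());

    let mut metadata: HashMap<Arc<String>, Box<dyn Any + Send>> = HashMap::new();

    // ...then each per-packet insert only bumps a reference count,
    // instead of copying the whole string.
    metadata.insert(key.clone(), Box::new(42usize));
}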

Cross Compilation

Will need to build Linux, Windows and Mac binaries - at the very least.

Ideally having this baked into CI (#2) would be great.

Add default method impl to Filter trait

Currently, a Filter trait impl needs to override both trait methods - however, a lot of filters only require custom behavior for one method, while the other keeps the default pass-through behavior.

It would be nice to let impls provide only their custom logic, rather than always duplicating the default behavior (see the sketch after this list):

  • Add the default method implementations to the trait that pass through the received context
  • Remove any Filter impl that previously explicitly included the default behavior
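A minimal sketch of the defaults, assuming the context/response object shapes proposed in the FilterChain redesign below, and that EndPoint is Clone:

/// Filter with default pass-through implementations.
pub trait Filter: Send + Sync {
    /// Default: forward the packet unmodified to all endpoints.
    fn on_downstream_receive(&self, ctx: &DownstreamContext) -> Option<DownstreamResponse> {
        Some(DownstreamResponse {
            endpoints: ctx.endpoints.clone(),
            contents: ctx.contents.clone(),
        })
    }

    /// Default: return the packet unmodified.
    fn on_upstream_receive(&self, ctx: &UpstreamContext) -> Option<UpstreamResponse> {
        Some(UpstreamResponse { contents: ctx.contents.clone() })
    }
}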

Filter idea: QUIC based filter

(Mostly placeholder for further investigation)

Secure UDP communication provided on both the sender and receiver side, using QUIC for packet sending and receiving (but not the HTTP aspects).

https://github.com/cloudflare/quiche looks promising.

There are concerns about QUIC having its own reliable UDP implementation, which is not something we want. We should investigate whether that's the case in quiche, whether we can turn it off, and what other options exist.

Refactor UpstreamEndpoints retain to return enum

pub fn retain<F>(&mut self, predicate: F) -> Result<(), AllEndpointsRemovedError>

We want to change the function signature to return an enum rather than a Result so that it doesn't seem like something bad has happened when we catch an error.

Maybe something like:

enum RetainResult {
    Empty,
    Ok,
}

⬆️ Sacrificial draft. Change as necessary.

Then we can also add extra states if needed down the line.

FilterChain: Arguments should be a context object, and return a response object

Passing in a context object and returning a response object means we can add extra fields to our Filter methods without breaking the type signatures down the line.

Something like this:

use std::net::SocketAddr;

pub struct DownstreamContext {
    endpoints: Vec<EndPoint>,
    from: SocketAddr,
    contents: Vec<u8>,
}

pub struct UpstreamContext {
    endpoint: EndPoint,
    from: SocketAddr,
    to: SocketAddr,
    contents: Vec<u8>,
}

pub struct DownstreamResponse {
    endpoints: Vec<EndPoint>,
    contents: Vec<u8>,
}

pub struct UpstreamResponse {
    contents: Vec<u8>,
}

/// Filter is a trait for routing and manipulating packets.
pub trait Filter: Send + Sync {
    /// on_downstream_receive filters packets received from the local port, and potentially sends them
    /// to configured endpoints.
    /// This function should return the array of endpoints that the packet should be sent to,
    /// and the packet that should be sent (which may be manipulated) as well.
    /// If the packet should be rejected, return None.
    fn on_downstream_receive(
        &self,
        ctx: &DownstreamContext
    ) -> Option<DownstreamResponse>;

    /// on_upstream_receive filters packets received from `from`, to a given endpoint,
    /// that are going back to the original sender.
    /// This function should return the packet to be sent (which may be manipulated).
    /// If the packet should be rejected, return None.
    fn on_upstream_receive(
        &self,
        ctx: &UpstreamContext
    ) -> Option<UpstreamResponse>;
}

How do people feel about this?

Add clippy to ci

It would be nice to run clippy on commits to find mistakes and warnings. Cargo embeds it, so it should be straightforward to include.
https://github.com/rust-lang/rust-clippy

  • Add clippy step to ci pipeline (commits should fail if clippy finds any warnings)
  • Fix pre-existing issues, if any, that clippy finds

Should sender/receiver be client/server as concepts?

Not sure if "sender" and "receiver" are good names as concepts.

Since the "sender" usually sits in front of a game client, and the "receiver" usually sits in front of a game server, we could rename this to being a "client proxy" rather than a "sender proxy" and a "server proxy" rather than a "receiver proxy"?

But the flip side is, sender/receiver do explain who starts the initial connection, so I'm not sure.

Throwing this out there for discussion. How do other people feel?

Performance Testing

We should have some kind of repeatable performance harness, ideally running on a regular basis, so we can see if we have performance problems or regressions.

Filter Idea: Compression Filter

Some UDP data may benefit from compression before being sent - especially when considering public cloud ingress/egress costs.

It would be nice to have some standard compression algorithms as a filter, to compress and decompress data as it comes in and out.

Filter idea: Scriptable filter

(Placeholder, more to work out)

The idea being that we would integrate some kind of scripting language into Quilkin, such that if you needed custom filters, you could script them rather than having to use Quilkin as a library and build your own binaries.

This might tie in well with #12 - since scripted filters are expected to be slower than native Rust (although that's worth experimenting with).

Useful resources:

Refactor Client.lb_policy into it's own filter

  1. It sounds like we are in agreement that lb_policy should become a filter - which I think makes sense, and makes it easier for the end user to reconcile what is happening, as they only need to worry about the concept of "filters", not "filters" and "load balancers" and how they interact. If that's the case - let's write up a ticket, so we can get that work done.

Originally posted by @markmandel in #98 (comment)

To explain again - we should remove lb_policy from the Client proxy configuration, and RANDOM and ROUND_ROBIN should become Filters that can be applied for this functionality. (Broadcast goes away, since that will be the default for both Client and Server implementations.)

Question:
Should we have two filters, one for RANDOM and one for ROUND_ROBIN? Or should we have a LoadBalancer filter, with a "policy" configuration option, that takes one of these values as the config option?

Set log level

Currently we do not explicitly set a log level, and as a result we log debug messages to stdout.
Ideally we would make the log level configurable via an endpoint on the admin server, but for this ticket we can hard-code the log level to INFO, to avoid the proxy producing chatty logs.

We currently create the base logger in two places - one for the actual proxy and one for tests.
Not sure what the difference is at the moment, but if possible we can remove the one used in tests and have only one function for creating the logger.
Also, the logger-creating functions currently live in an unrelated file (proxy/builder.rs).

  • Add a dedicated logger module under src/logger.rs and move the logger creating functions over to it
  • Hard code any created logger to log at level INFO
  • If feasible, remove the function entirely for creating loggers in tests, and reuse one function across the code base
