asaaki / capture-the-ip

Claim as many IPs as you can and become a block owner

Home Page: https://ipv4.quest/

License: Apache License 2.0

capture-the-ip's Introduction

Capture The IP — ipv4.quest

This is an incredibly over-engineered Rust version of the ipv4.games site and its code.

The objective is to send requests to the site ipv4.quest from as many different IP addresses as possible.

If you claim and hold the majority of an #.0.0.0/8 address block, you get a point.

Read more about it here:
https://markentier.tech/posts/2022/12/capture-the-ip/

Technology

  • language: rust
  • web framework: axum (built on tower and hyper, runs on tokio)
  • datastore interface/orm: diesel - including some async flavours
  • datastore: postgres - powerful and versatile database
  • web hosting: fly.io - quirky but awesome app hosting
  • database hosting: neon.tech - free tech preview of serverless postgres

Design and architecture

Project structure

This project uses a cargo workspace and is divided into several crates for different purposes.

There are crates for the binaries/executables and library crates for the business logic of the project.

cti_server, cti_refresher, and cti_migrate are the binaries. The first one is the most important: it's the "game" server itself. The refresher is currently not used separately; for now the server does this job itself in a background thread. The last one, as the name indicates, helps with database migrations. Since the project uses diesel as its database interface and ORM, it's up to the administrator to decide which tool to use; cti_migrate can run on a remote server without any Rust tooling present.

The actual business logic for the server and refresher lives in cti_core, which itself consumes some helper crates like cti_constants, cti_types, cti_env, cti_schema, and cti_assets. The helper crates mostly exist because the migration tool's logic is a bit different but still needs some common definitions and functions.

Just because I can, cti_core is compiled as a shared library (a .dll on Windows, a .so on Linux, and theoretically a .dylib on macOS, though that's not a platform I target) and then loaded by cti_server and cti_refresher; the migration tool uses slightly different logic and does not depend on cti_core at all. Interestingly, cti_core as a standalone library is much bigger than a binary that statically depends on it. I assume Rust can make some good optimizations when munching together some rlibs. Since this is still an exercise in over-engineering, I accept the size overhead for the added complexity … is that a lose-lose situation?
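
As an illustration of what loading such a shared library could look like, here is a minimal sketch using the libloading crate; the library path and the run_server symbol are made up for this example, and the real binaries may simply link the dylib at build time instead.

use libloading::{Library, Symbol};

fn main() {
    // Loading a shared library at runtime is inherently unsafe.
    unsafe {
        // Platform-specific name; "libcti_core.so" assumes Linux.
        let lib = Library::new("libcti_core.so").expect("failed to load cti_core");
        // Hypothetical exported entry point.
        let run: Symbol<unsafe extern "C" fn() -> i32> =
            lib.get(b"run_server").expect("symbol not found");
        run();
    }
}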

$ tree -d -L 1
.
├── cti_assets
├── cti_constants
├── cti_core
├── cti_env
├── cti_migrate
├── cti_refresher
├── cti_schema
├── cti_server
├── cti_types
├── frontend
├── migrations
└── tmp

The Server

Since axum is a pretty slim web application framework, the code is neither exciting nor controversial.

Early on, due to some data model decisions, the service came to include a background worker thread in addition to the HTTP app.

To provide graceful shutdown, the crate tokio-graceful-shutdown is used to manage the different subsystems (HTTP server, background worker, and a shutdown timer).

The background thread communicates via channels, so that it too can shut down gracefully; tokio's select! is a pretty useful tool here.

All the background thread does is refresh some materialized views at a fixed interval and record a timestamp of when the last run happened; a sketch of this pattern follows below.
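
The following is a minimal sketch of that loop using plain tokio primitives (a watch channel for shutdown plus select!); it illustrates the pattern rather than the project's exact wiring through tokio-graceful-shutdown, and the interval and refresh_views are placeholders.

use std::time::Duration;
use tokio::{select, sync::watch, time};

// Placeholder for the actual refresh logic (see the Database section).
async fn refresh_views() {}

async fn background_worker(mut shutdown: watch::Receiver<bool>) {
    let mut ticker = time::interval(Duration::from_secs(60)); // assumed interval
    loop {
        select! {
            _ = ticker.tick() => refresh_views().await,
            // Leave the loop as soon as shutdown is signaled.
            _ = shutdown.changed() => break,
        }
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = watch::channel(false);
    let worker = tokio::spawn(background_worker(rx));
    // ... run the HTTP server here until it stops, then signal the worker:
    tx.send(true).ok();
    worker.await.ok();
}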

Database

In total there are three tables, one of which exists because of diesel (it keeps track of migrations). The other two are captures and timestamps. The latter only stores a single timestamp for the refresh cycle, as I didn't see a need to involve another datastore like Redis or to implement some distributed messaging system (which is probably what it would take to really over-engineer this solution).

The main table, captures, stores the latest claim for each IP address. To keep the storage needs in check only the last capture of an IP gets stored, so no per-IP history is kept. Meaning: if you lose an IP to someone else, you disappear from the database (unless you have more IPs claimed, of course). An upsert makes this behavior straightforward, as sketched below.
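
A minimal sketch of that "last claim wins" write as a Postgres upsert via diesel; the schema here (column names and types) is an assumption for illustration, not the project's actual definition.

use diesel::prelude::*;

// Assumed schema; the real table definition may differ.
diesel::table! {
    captures (ip) {
        ip -> BigInt,
        name -> Text,
        captured_at -> Timestamp,
    }
}

fn claim_ip(conn: &mut PgConnection, ip_addr: i64, player: &str) -> QueryResult<usize> {
    use diesel::dsl::now;
    diesel::insert_into(captures::table)
        .values((
            captures::ip.eq(ip_addr),
            captures::name.eq(player),
            captures::captured_at.eq(now),
        ))
        .on_conflict(captures::ip)
        // Overwrite the previous owner instead of keeping history.
        .do_update()
        .set((captures::name.eq(player), captures::captured_at.eq(now)))
        .execute(conn)
}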

For various purposes there are a bunch of materialized views, which behave like views but are persisted like tables. They can be refreshed to get the most recent version of the query they represent, as sketched below. This approach is used as a caching layer on the database side; the data does not need to be real-time, and the mentioned timestamp informs users about when the last refresh happened.
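
Refreshing such a view from the background worker can be as simple as the following; the view name block_owners and the refreshed_at column are made up for this sketch.

use diesel::prelude::*;

fn refresh_views(conn: &mut PgConnection) -> QueryResult<()> {
    // CONCURRENTLY avoids blocking readers while the view is rebuilt
    // (it requires a unique index on the materialized view).
    diesel::sql_query("REFRESH MATERIALIZED VIEW CONCURRENTLY block_owners").execute(conn)?;
    // Record when this refresh cycle ran (the single row in `timestamps`).
    diesel::sql_query("UPDATE timestamps SET refreshed_at = NOW()").execute(conn)?;
    Ok(())
}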

The queries are not too slow, but even a few hundred milliseconds is already too slow for me. The materialized views help to keep that low enough for now.

Last but not least, these views keep some nasty SQL away from the app itself.

Frontend

It's a very simple setup here. Almost all views are static and compiled into the final binary of the server.
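
Assuming the embedding works roughly like this (the path is hypothetical; the cti_assets crate presumably handles the real thing), compiling a view into the binary can be as simple as:

use axum::response::Html;

// Embeds the page into the binary at compile time; hypothetical path.
const INDEX_HTML: &str = include_str!("../frontend/index.html");

async fn index() -> Html<&'static str> {
    Html(INDEX_HTML)
}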

The only dynamic view is the /claim endpoint, which sends a tiny HTML response with your IP and name included. That should make it usable outside of browsers, so you can verify everything worked without extra API calls.
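
A handler in that spirit could look like the sketch below; the query parameter, response markup, and routing are assumptions, and the real endpoint also records the claim in the database.

use axum::{extract::{ConnectInfo, Query}, response::Html, routing::get, Router};
use serde::Deserialize;
use std::net::SocketAddr;

#[derive(Deserialize)]
struct Claim {
    name: String, // assumed query parameter: /claim?name=...
}

async fn claim(ConnectInfo(addr): ConnectInfo<SocketAddr>, Query(q): Query<Claim>) -> Html<String> {
    // The real handler would also write the claim to the captures table.
    Html(format!("<p>{} claimed {}</p>", q.name, addr.ip()))
}

fn app() -> Router {
    // Serve via .into_make_service_with_connect_info::<SocketAddr>()
    // so that ConnectInfo yields the client address.
    Router::new().route("/claim", get(claim))
}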

The JavaScript code is vanilla, no fancy library or framework used. The index page makes a few API calls to retrieve some JSON data and renders it into the right places.

The only reason to leave it like that instead of over-engineering it is to provide a decent user experience. Any framework would ultimately add overhead/bloat, which I don't want here.

One day I might add a secondary main page where I test fancy stuff like Wasm-based views (maybe with Yew or whatever is the latest and greatest for such a task).


Don't forget to visit ipv4.quest and claim your IP and block!

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT License

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

capture-the-ip's People

Contributors

asaaki, dependabot[bot]

capture-the-ip's Issues

Access-Control-Allow-Origin: *

Hello, I was wondering if it would be possible for you to add the HTTP header "Access-Control-Allow-Origin: *" to responses to the claim page. This would make it so sites with a Cross Origin Embedder Policy of require-corp are able to load the claim page. Thank you!

Don't trust X-Forwarded-For and related headers by default

X-Forwarded-For/X-Real-IP/CF-Connecting-IP/etc should not be used as a source of IPs if CTI is not actually behind a reverse proxy.
(Maybe it should be a config option of some sort, along with a way to configure which IPs such headers can be accepted from)

Example: (screenshot omitted)

Someone seems to have abused this already: (screenshot omitted)
