
rollupid's Introduction

Identity management for the private web



What is Rollup?

Rollup is authorization infrastructure for your apps, so you can build great relationships with your users. For more information, please check out our website and docs.

Rollup Monorepo Tour

Let's take a look around the Rollup monorepo layout...

Platform

The platform/ directory is where all the core identity services are located. The Rollup platform is organized into "local-first" (or logically local) nodes (account, address, and more), organized in a graph by the Galaxy service.

Apps

The apps/ directory is where the presentation-layer applications (or backends for frontends) live. These apps include the Profile user experience as well as the Developer Console app.

Packages

The packages/ directory contains our libraries and other shared components.

Docs

The docs/ directory contains the developer documentation portal.

Develop

Configuration

Please use the following tools and versions when developing with this repository:

  • Node.js v17+

Nix Environment

Install Nix, then run nix-build to install the Nix packages and nix-shell to start a shell with a fully configured development environment.

Note that Docker doesn't fully work when using Nix packages.

Developing

This monorepo is managed with Yarn workspaces and nested workspaces. You can run yarn commands (e.g., yarn dev) to run all the platform services and dependencies together. Applications require more resources, so it is recommended to run them individually.

Running


Before getting started, please visit each project's README for more information on initial setup.

Please follow the steps below to get started:

  1. Install dependencies with yarn
  2. Set up local edges with cd platform/edges && yarn db:execute
  3. Run the platform from the platform directory with yarn dev
  4. Run the apps from the apps directory with yarn start

Contributing

We are happy to accept contributions of all sizes. Feel free to submit a pull request.

Also check out our contributing guidelines.

rollupid's People

Contributors

alfl, betimshahini, billkube, cosmin-parvulescu, cradoe, crimeminister, deankale, dependabot[bot], drew-patel, jpetrich, maurerbot, omahs, poolsar42, szkl, tangrammer


rollupid's Issues

chore(sdk): building, packaging, publishing

  • sdk
    • associate @kubelt npm package scope with GitHub package registry
      • this is done via an .npmrc under the top-level sdk/ directory
      • the intent is to publish privately to GitHub and then eventually publicly to npm (and maybe GitHub)
  • @kubelt/ddt
    • publish package to GitHub
    • write package docs
    • build pipeline
  • @kubelt/sdk
    • publish npm package to GitHub (compiled to JavaScript)
    • publish maven package to GitHub (as CLJS library)
    • write package docs
    • build pipeline
    • set default remote address for IPFS client
    • set default remote address for p2p client (api.kubelt.com)
  • @kubelt/web
    • publish package to GitHub
    • write package docs
    • build pipeline
  • @kubelt/kbt
    • refer to the published SDK (cljs) from shadow-cljs.edn; the CLI now depends on the library directly
      • com.kubelt/sdk "0.0.2"
    • enable fetching of private resource
      • currently we can resolve the coordinate of the dependency
      • however, we are getting an authentication error
    • publish package to GitHub
    • write package docs
    • build pipeline

Notes

  • still need to decide on versioning scheme, release git flow, tagging, etc.
  • need to figure out our preferred error handling strategy
    • e.g., if an invalid multiaddress is passed to init, should it return an error map or throw an exception?

SDK > Upgrade Path

Fat links need to encode an API version so we can upgrade content to new APIs via the client.
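
A rough shape sketch in TypeScript of what such a link could carry; the field names here are illustrative, not an existing wire format:

```ts
// Illustrative only: a "fat link" that records the API version its target was
// written with, so clients can detect and upgrade stale content.
interface FatLink {
  cid: string                     // content address of the linked data
  apiVersion: string              // e.g. "v1"; compared against the client's API version
  meta?: Record<string, unknown>  // any other link metadata
}

function needsUpgrade(link: FatLink, clientApiVersion: string): boolean {
  return link.apiVersion !== clientApiVersion
}
```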

chore(rdf): misc refinements to RDF/cljs for conformance with RDF/js

  • Our schema for RDF/cljs doesn't yet mirror this bit of verbiage from the RDF/js spec:

If the literal has a language, its datatype has the IRI "http://www.w3.org/1999/02/22-rdf-syntax-ns#langString". Otherwise, if no datatype is explicitly specified, the datatype has the IRI "http://www.w3.org/2001/XMLSchema#string".

  • our schema needs to allow the "subject" term of a quad to recursively be another quad
    • this will require reworking the malli quad schema to create a "registry", which is what enables recursive schemas
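
The quoted datatype rule is small enough to render as code. A minimal TypeScript sketch of the rule (the actual schema is written with malli in ClojureScript; the names here are illustrative):

```ts
const RDF_LANG_STRING = 'http://www.w3.org/1999/02/22-rdf-syntax-ns#langString'
const XSD_STRING = 'http://www.w3.org/2001/XMLSchema#string'

interface Literal {
  value: string
  language: string             // '' when the literal has no language tag
  datatype: { value: string }  // the datatype IRI, wrapped NamedNode-style
}

function literal(value: string, language = '', datatype?: string): Literal {
  // A language-tagged literal is always rdf:langString; otherwise an
  // unspecified datatype defaults to xsd:string, per the RDF/js spec.
  const iri = language ? RDF_LANG_STRING : datatype ?? XSD_STRING
  return { value, language, datatype: { value: iri } }
}
```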

feat(sdk): inject IPFS client

Inject an IPFS client into the SDK instance that is appropriate for the execution context. If possible, wrap the client in one or more protocols that expose only the specific, limited functionality that is required. The intent is to avoid accidental binding on incidental functionality, and to reduce the work of supporting multiple execution contexts.

For example, the SDK needs to be able to read DAGs from IPFS as well as write them. Rather than injecting a client that exposes this capability along with many other features, allowing them to be used in an uncontrolled fashion, we might define a DAGIO protocol that exposes just the DAG reading/writing feature. An implementation of this protocol for use in a Node.js-based application might wrap a JavaScript HTTP client for IPFS, while an implementation targeting the browser might use a JavaScript message proxy client to talk to an IPFS instance running in a SharedWorker.
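
A sketch of the narrow-protocol idea in TypeScript terms (all names are illustrative, not an existing Kubelt API; the HTTP paths follow the Kubo-style IPFS HTTP API):

```ts
// Expose only DAG read/write, regardless of what the underlying client can do.
interface DagIO {
  getDag(cid: string): Promise<unknown>
  putDag(value: unknown): Promise<string> // resolves to the CID of the stored DAG
}

// Node.js flavour: wrap the IPFS HTTP API (which requires POST for all calls).
function httpDagIO(apiUrl: string): DagIO {
  return {
    async getDag(cid) {
      const res = await fetch(`${apiUrl}/api/v0/dag/get?arg=${cid}`, { method: 'POST' })
      if (!res.ok) throw new Error(`dag/get failed: ${res.status}`)
      return res.json()
    },
    async putDag(value) {
      const body = new FormData()
      body.append('file', new Blob([JSON.stringify(value)]))
      const res = await fetch(`${apiUrl}/api/v0/dag/put`, { method: 'POST', body })
      if (!res.ok) throw new Error(`dag/put failed: ${res.status}`)
      const { Cid } = await res.json() // Kubo answers { "Cid": { "/": "bafy..." } }
      return Cid['/']
    },
  }
}

// A browser flavour would implement the same DagIO interface by posting
// messages to an IPFS instance running in a SharedWorker.
```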

fix(ipfs): hasher encoding not working in browser

There is a dependency on node Buffer that prevents a test from executing successfully in the browser:

car.built_test

ERROR in (kw->hasher-test) (car/build_test.cljs:111:17) core.cljs:200:23
hash sha2-256 core.cljs:200:23
hasher encodes input correctly core.cljs:200:23
expected: (= sum (hash-encode hasher example-input)) core.cljs:200:23
  actual: #object[ReferenceError ReferenceError: Buffer is not defined]
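
The usual portable fix is to drop Buffer from shared code in favour of Uint8Array and TextEncoder/TextDecoder, which exist in both node and browsers; the ClojureScript change would be the interop equivalent of this sketch:

```ts
// Before (node-only): Buffer.from(input, 'utf8') and buf.toString('hex')
// After (portable):
const utf8 = new TextEncoder()

function toBytes(input: string): Uint8Array {
  return utf8.encode(input)
}

function toHex(bytes: Uint8Array): string {
  return Array.from(bytes, (b) => b.toString(16).padStart(2, '0')).join('')
}
```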

SDK > auto-detect platform and operating system

Currently we require the user to specify the platform they're running on in order to correctly initialize the SDK. Instead, detect the platform (node/browser) and operating system (as well as any other pertinent platform details) so that the user doesn't need to supply this information. The capability to supply a :sys/platform option should remain for development purposes; it will override any auto-detected values.
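
The detection itself can be quite small; a TypeScript sketch of the intended behaviour (the real option is the ClojureScript :sys/platform key):

```ts
type Platform = 'node' | 'browser'

function detectPlatform(override?: Platform): Platform {
  if (override) return override // keep the development-time override
  // No `window` global is a reasonable first-cut signal for a node context;
  // finer-grained OS detail can come from process.platform / navigator.
  return typeof window === 'undefined' ? 'node' : 'browser'
}
```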

SDK > Add technical standards

Why

We want to agree as a team which versions of which tools we're using, across which platforms.

What

  • Decide where these docs should live.
  • Document Node / npm versions.

SDK > integrate into demo website

Inject the SDK into the demo website. This will serve as a sanity test and as a demonstration of how to use the SDK from a browser. The current plan is to attach the system map to window (e.g., window.kubelt) when successfully initialized. This should make for easy detection from, e.g., browser extensions.
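
A sketch of the attach step, where initSdk stands in for whatever the SDK's real initialization entry point turns out to be:

```ts
declare global {
  interface Window { kubelt?: unknown }
}
declare function initSdk(): Promise<unknown> // hypothetical init call

initSdk().then((system) => {
  window.kubelt = system // presence of window.kubelt signals "SDK is ready"
  // Announce readiness so scripts in other worlds (e.g. extension content
  // scripts) can notice without polling page globals.
  window.postMessage({ type: 'kubelt-sdk-ready' }, '*')
})

export {} // make this file a module so `declare global` is allowed
```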

EPIC > Customer Portal

Why

Customers need a portal to access their account with Kubelt.

How

A dapp powered by our SDK, CMS, and gateway. This dapp lets customers define and manage their applications. These applications will have options like adding CNAMEs for direct gateway usage and IPNS convenience.

Billing options and usage data should also be made available.

What

  • #138
  • #139
  • Fetch customer portal user dapp data.
  • Update gateway for billing tracking via CNAME (direct gateway) or pk+signature (API proxy)
  • Integrate with fiat or crypto payments provider.
  • Dashboard for usage metrics.

Dependencies

#3

Notes

  • By default, the logged-in user is an admin.
  • With permissions like in #53, we can add sub accounts with permissions.

feat(sdk): convert BAG to CAR

Convert a BAG data structure to a CAR map (a shape sketch follows the list) such that:

  • the map has a roots collection of CIDs
  • the map has a blocks collection of IPLD blocks
  • nodes in the source BAG are encoded as blocks using the specified :ipld/codec
  • nodes in the source BAG are encoded as blocks using the specified :ipld/hash
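
A shape sketch of the conversion, assuming the multiformats Block API; the real implementation lives in ClojureScript, and Bag/bagToCar are illustrative names:

```ts
import * as Block from 'multiformats/block'
import * as dagCbor from '@ipld/dag-cbor'
import { sha256 } from 'multiformats/hashes/sha2'
import type { CID } from 'multiformats/cid'

interface Bag { dags: unknown[] } // a bundle of acyclic graphs
interface Car {
  roots: CID[]
  blocks: { cid: CID; bytes: Uint8Array }[]
}

async function bagToCar(bag: Bag): Promise<Car> {
  // Each node in the source BAG becomes an IPLD block, encoded with the
  // chosen codec and hasher (the :ipld/codec and :ipld/hash options above,
  // fixed here to dag-cbor + sha2-256 for the sketch).
  const blocks = await Promise.all(
    bag.dags.map((value) => Block.encode({ value, codec: dagCbor, hasher: sha256 }))
  )
  return {
    roots: blocks.map((b) => b.cid),
    blocks: blocks.map((b) => ({ cid: b.cid, bytes: b.bytes })),
  }
}
```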

EPIC > Sanity Plugin Alpha

Why

The CMS dapp is a web3 CMS-in-a-box for non-technical users that handles the publishing and naming of content to Kubelt.

How

The alpha version of the CMS will be delivered as a Sanity Studio plugin that adds a login and workflow step to publish content to Kubelt.

Login

When the workflow component is loaded (see below), the user should be prompted to log in by entering a passphrase. Login is completed by requesting that the user's wallet sign the collected passphrase and using the resulting signature as the seed for new master key material via the Kubelt SDK.

NOTE: Using Sanity Studio's parts system may be required; look at the MUX plugin for an example.

The SDK will use the master key material to bootstrap the app by loading the user's "Me DAG" into the local quad store.
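
A sketch of that login flow under stated assumptions: an EIP-1193 wallet provider (e.g. MetaMask) is injected as ethereum, and deriveMasterKey stands in for the SDK's real key-derivation entry point:

```ts
declare const ethereum: {
  request(args: { method: string; params?: unknown[] }): Promise<any>
}
declare function deriveMasterKey(seed: string): Promise<unknown> // hypothetical SDK call

async function login(passphrase: string): Promise<unknown> {
  const [account] = await ethereum.request({ method: 'eth_requestAccounts' })
  // personal_sign takes (message, address); the resulting signature is the
  // seed for the new master key material.
  const signature: string = await ethereum.request({
    method: 'personal_sign',
    params: [passphrase, account],
  })
  return deriveMasterKey(signature)
}
```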

Workflow

This step grabs the document being published and passes the draft to the Kubelt SDK to generate a schema and translate the document to RDF quads. The SDK will then pack the contents, publish them to IPFS, and return the corresponding CIDs. The root of this collection will also be named by its CID and published to Kubelt.

This published information will be returned to the plugin, and the workflow will then patch the document with it.

NOTE: Users are expected to add their own schemas as they would currently do with Sanity.

TBD: How does naming work with this workflow step if there are multiple named items/collections?

What

  • #6
  • kubelt/sanity-plugin-kubelt#24
  • kubelt/sanity-plugin-kubelt#23
  • kubelt/sanity-plugin-kubelt#12
  • kubelt/sanity-plugin-kubelt#13
  • kubelt/sanity-plugin-kubelt#25
  • kubelt/sanity-plugin-kubelt#27
  • kubelt/sanity-plugin-kubelt#34
  • kubelt/sanity-plugin-kubelt#50
  • kubelt/sanity-plugin-kubelt#51
  • FEAT(CMS DAPP)> Prompt user to attribute other properties to content (e.g., protected/hidden via multiauth, etc. TBD).
  • FEAT(CMS DAPP)> Send the document with the marked name field to the SDK
  • FEAT(CMS DAPP)> Take the SDK response and patch the document with the kubelt name CID (plus any other metadata)
  • FEAT(CMS DAPP)> Complete the workflow

Dependencies

#3

Notes

fix dependency error: secp256k1 v1.5.1

The required package @noble/secp256k1-1.5.1.tgz does not exist.

This version isn't available on npmjs.org; only 1.5.0 and 1.5.2 are listed. Adding 1.5.2.

CHORE > Clojure compilation failed on ```build:web:test```

  1. cd ./sdk/web
  2. bb run build:web:test

Result

[:web-test] Compiling ...
------ WARNING #1 -  -----------------------------------------------------------
 File: ~/.m2/repository/metosin/malli/0.8.0/malli-0.8.0.jar!/malli/core.cljc:2258:26
--------------------------------------------------------------------------------
2255 |          (reduce -register-var {}))))
2256 |
2257 | (defn class-schemas []
2258 |   {#?(:clj Pattern, :cljs js/RegExp) (-re-schema true)})
--------------------------------^-----------------------------------------------
 References to the global RegExp object prevents optimization of regular expressions.
--------------------------------------------------------------------------------
2259 |
2260 | (defn comparator-schemas []
2261 |   (->> {:> >, :>= >=, :< <, :<= <=, := =, :not= not=}
2262 |        (-vmap (fn [[k v]] [k (-simple-schema (fn [_ [child]]
--------------------------------------------------------------------------------
nil
Closure compilation failed with 1 errors
--- cljs_test_display/core.cljs:411
@define cljs_test_display.core.root_node_id has already been set at cljs_test_display/core.cljs:5:23.

Error while executing task: build:web:test

KUBELT > CHORE(Sanity): Remove kItem definition dependency

Why

As a product owner, I want plugin users to have a hassle-free setup experience. As it currently stands, a developer would need to configure their current schema to attach the properties we need. If we could append to schemas at runtime, this would effectively let us do away with user configuration.

What

  • Investigate the possibility of removing user schema configuration
  • Weigh the pros and cons vs. kubelt/sanity-plugin-kubelt#48
  • Create FEAT

GHA > Go-Live

Why

This is the reason we're here: deliver cool web3 functionality to users.

What

This is not related to customer-specific deployment; this is go-live for the public.

Publish to GitHub Marketplace, drive traffic by advertising through marketing channels.

SDK > integrate with p2p system

The p2p naming system exposes a RESTful API. Integrate that API into the SDK (a thin client sketch follows the list):

  • add x-platform HTTP client support (Node.js, browser)
  • inject the URI of the API into the SDK and wrap the API calls that it exposes
    • /updatekbt
    • /kbt/:kbtid
    • (TODO?) /register
  • support JWT for securing API calls
    • this will require a local wallet for storing a keypair from which we derive JWT crypto material
    • #45
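
A thin client sketch over those endpoints (the payload shapes are not specified in this issue, so they are left as unknown):

```ts
class NameClient {
  constructor(private baseUrl: string, private jwt?: string) {}

  private headers(): Record<string, string> {
    return this.jwt ? { authorization: `Bearer ${this.jwt}` } : {}
  }

  // GET /kbt/:kbtid, i.e. resolve a name.
  async resolve(kbtId: string): Promise<unknown> {
    const res = await fetch(`${this.baseUrl}/kbt/${kbtId}`, { headers: this.headers() })
    if (!res.ok) throw new Error(`resolve failed: ${res.status}`)
    return res.json()
  }

  // POST /updatekbt, i.e. create or update a name.
  async update(payload: unknown): Promise<unknown> {
    const res = await fetch(`${this.baseUrl}/updatekbt`, {
      method: 'POST',
      headers: { 'content-type': 'application/json', ...this.headers() },
      body: JSON.stringify(payload),
    })
    if (!res.ok) throw new Error(`update failed: ${res.status}`)
    return res.json()
  }
}
```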

SDK > detect SDK from browser extension

Update browser extension to detect when the SDK has been initialized and is available in the local context. For now, this should do nothing more than "activate" the extension so that it can be interacted with by the user. Eventually, more specific capabilities will come online. Work out the means by which the SDK exposes its capabilities / version, etc. to the extension.
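
One wrinkle worth noting in a sketch: extension content scripts run in an isolated world and cannot read the page's window.kubelt directly, so a workable pattern is for the page to announce readiness via postMessage (as in the demo-website sketch earlier) and for the content script to listen:

```ts
declare const chrome: { runtime: { sendMessage(msg: unknown): void } }

window.addEventListener('message', (event) => {
  if (event.source !== window) return // only trust messages from this page
  if ((event.data as { type?: string })?.type === 'kubelt-sdk-ready') {
    // "Activate" the extension; the message shape is illustrative.
    chrome.runtime.sendMessage({ type: 'kubelt-sdk-detected' })
  }
})
```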

chore(rdf): missing tests

A missing test is as good as a bug?

  • rdf.data-factory-test/subject-test
  • rdf.data-factory-test/predicate-test
  • rdf.data-factory-test/object-test
  • rdf.data-factory-test/graph-test
  • test that quad subject can recursively contain another quad
  • test that quad literals conform to rules around datatype

SDK > sharing permissions

Rough notes:

61810711-3f27-442c-9ff6-a0ad8fb91321

  • thinking about how to deal with user permissions / access controls
  • in the gateway we mark multiauth and an owner
  • the owner generates the name keys and envelopes them into an outbox for the second user
  • the second user picks that up and links it to their collection of metadata so they can also reproduce the name
  • the gateway and hypercores respect that user 2 has write permissions using user 2's signature
  • if that is revoked they can still reproduce the name, but mutations will be rejected
  • technically, just the name and not the name keys need to be shared

feat(sdk): local wallet implementation

In the browser we use a wallet to manage the key pair(s) that represent a user to Kubelt. For development, testing, and eventual use within the production API, we need a local wallet accessible when running in a node context. Ideally, this wallet is an implementation of a shared Wallet interface that is also implemented in the browser, in a bid to improve the platform-agnostic nature of the SDK.
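
An interface sketch of the shared Wallet idea. It assumes @noble/secp256k1 (the 1.x API) for the key pair, which already appears as a dependency elsewhere in this repo, plus the WebCrypto digest available in browsers and Node 18+:

```ts
import * as secp from '@noble/secp256k1'

interface Wallet {
  publicKey(): Promise<Uint8Array>
  sign(message: Uint8Array): Promise<Uint8Array>
}

// Node-context implementation holding a raw private key; a browser
// implementation of the same interface would defer to an injected wallet.
function localWallet(privateKey: Uint8Array): Wallet {
  return {
    async publicKey() {
      return secp.getPublicKey(privateKey)
    },
    async sign(message) {
      // secp256k1 signs 32-byte digests, so hash the message first.
      const digest = new Uint8Array(await crypto.subtle.digest('SHA-256', message))
      return secp.sign(digest, privateKey)
    },
  }
}
```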

EPIC > SDK Alpha

Why

The Kubelt SDK is an easy-to-use dapp development library backed by Kubelt.

How

The SDK is a functional ClojureScript library that provides the following functionality.

NOTE: Dependencies like web3.js and IPFS must be made available in context and configured via DI

Identity

Cryptography relating to authenticating the user and materializing their "Me DAG".

NOTE: do we use wat sign in WASM?

Naming

Cryptography relating to naming content on the kubelt hypercore network.

NOTE: do we use libsodium in WASM?

NOTE: IM3000 like collaboration out of scope for alpha.

What

  • An interface for resolving an identity into a session and bootstrapping a Kubelt Core
  • An interface for resolving names and appending names to the Kubelt Core
  • An interface for writing content and linking to the Kubelt Core

Features

Bugs

Meta

Notes

SDK > define the BAG abstraction

We define a BAG (Bundle of Acyclic Graphs) as the Clojure/Script data-centric abstraction of an IPFS CAR (Content Addressable aRchive). A BAG is a collection of DAGs and can be directly transformed into a CAR.

CHORE > Storage service

Why

I would like to abstract away the storage implementation so that code is easier to test, and so we can potentially use a library that makes use of IndexedDB and falls back to localStorage. (An interface sketch follows the notes below.)

What

  • Create storageService
  • Use sessionStorage (?)
  • Encryption (?)

Notes

  • sessionStorage would be safer
  • If we use IndexedDB or localStorage we might need to find a way to encrypt (signed shared secret with wallet?)
  • One advantage could be the mimicking of hypercore namespaces
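
An interface sketch of the proposed storageService, assuming the Web Storage API; an IndexedDB-backed variant (e.g. via a wrapper library) would implement the same interface, as would an encrypting decorator:

```ts
interface StorageService {
  get(key: string): string | null
  set(key: string, value: string): void
  remove(key: string): void
}

// Back the service with sessionStorage or localStorage interchangeably.
function webStorageService(backing: Storage = sessionStorage): StorageService {
  return {
    get: (key) => backing.getItem(key),
    set: (key, value) => backing.setItem(key, value),
    remove: (key) => backing.removeItem(key),
  }
}

// Swapping the backing store is then a one-line change at the call site:
const storage = webStorageService(localStorage)
```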

FEAT > `json` CLI

Why

We want the CLI to be use case agnostic.

What

Rename the current `car` packing command to `json`.

EPIC > Peer Node Alpha

Why

The Peer Node is a peer-to-peer framework for dapp development. Peers offer auth, storage, naming, and other services for dapps to consume via the Kubelt SDK.

How

The alpha version of the peer node will be a hypercore-libp2p powered peer that is responsible for content names + metadata storage and lookup.

Auth

Multiauth should be used to delegate access to content names and their metadata: for instance, a signature or smart contract to access and mutate metadata.

NOTE: name metadata TBD.

Naming

A libp2p stream handler endpoint that receives naming messages will authenticate the message and create/update an entry in the hypercore p2p KV.

NOTE: can this be done with hypercore's next autobase?

What

  • A hypercore-powered Kubelt node with a naming service handler
  • Multiauth handling for name mutations
  • #85
  • #92
  • #89
  • #78
  • #79
  • #80
  • #81

Notes

SDK > add x-platform HTTP client support

We need to make HTTP calls from both Node.js (debug CLI, prod CLI, tests) and the browser. For example, we need to make HTTP requests to talk to the p2p naming system. Expose a single HTTP interface with implementations for each platform/environment.
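
One way to expose that single interface is to resolve a fetch implementation once and build everything else on top of it; Node 18+ and browsers ship a global fetch, while older Node would need a polyfill:

```ts
type Fetch = typeof globalThis.fetch

function resolveFetch(): Fetch {
  if (typeof globalThis.fetch === 'function') {
    return globalThis.fetch.bind(globalThis)
  }
  throw new Error('no fetch available: install a polyfill (e.g. node-fetch)')
}

// Everything above this line is platform-specific; everything below is shared.
async function getJson<T>(url: string): Promise<T> {
  const res = await resolveFetch()(url)
  if (!res.ok) throw new Error(`GET ${url} failed: ${res.status}`)
  return res.json() as Promise<T>
}
```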

chore(bzl): flesh out Bazel support

Use Bazel to drive a single command reproducible build of the entire repository contents. Additional tooling support will need to be authored to wrap any tooling not currently supported; place these tools in the top-level bzl/ directory.

DOCS > Backend API Logic

Adding this here, but should arguably live in wiki/notion.

Why

What does the backend (as opposed to the SDK/action) actually do to (fully) publish content?

What

Algos (an orchestration sketch follows the list):

  1. Accept CAR files from the SDK. Or accept content directly and encode it (i.e., the action calls the add and dag APIs proxied by the gateway).
  2. Publish those to Estuary, possibly one call per root if Estuary still doesn't accept multi-root CARs.
  3. Accept key material from the SDK; use it to generate, publish, and renew names.
  4. Call all publish gateways to cache the names and CIDs. Maybe keep calling until peers are fast.
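
An orchestration sketch of those steps; every helper here (publishToEstuary, publishName, warmGateway) is hypothetical, since neither Estuary's API nor the name-publication protocol is pinned down in this document:

```ts
declare function publishToEstuary(car: Uint8Array): Promise<void>                        // step 2
declare function publishName(key: Uint8Array, rootCid: string): Promise<string>         // step 3
declare function warmGateway(gateway: string, name: string, cid: string): Promise<void> // step 4

async function publishAll(
  cars: Uint8Array[],      // one CAR per root (step 2's workaround)
  roots: string[],
  key: Uint8Array,         // key material supplied by the SDK
  gateways: string[],
): Promise<void> {
  await Promise.all(cars.map(publishToEstuary))
  const names = await Promise.all(roots.map((root) => publishName(key, root)))
  // Step 4: fan out to every publish gateway to cache names and CIDs.
  await Promise.all(
    gateways.flatMap((g) => names.map((name, i) => warmGateway(g, name, roots[i])))
  )
}
```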

Chore > Setup Kubelt Mono Repo

Set up this repo with the following packages:

core

This is the core kubelt library that contains everything related to rdf/quads, IPLD+CAR, cryptography/identity, etc.

SDK

This is a wrapper around the core for loading into programming environments.

CLI

This is a wrapper around the core for loading into terminal environments.

peer

This is the core library for our p2p node that will support the dapp node framework in the future.

SDK > inject p2p client

Inject a client for the p2p Kubelt network into the SDK to provide access to naming and the other functionality the network provides.

feat(sdk): convert RDF vocabulary to DAG

Kubelt data stored in RDF format (within IPLD DAGs) is defined in terms of external vocabularies. A vocabulary is an RDF graph that describes a collection of entities and relationships, and serves to assign semantics to the data being stored. We would like data stored in Kubelt to be fully specified using vocabularies accessible via IPFS itself, ensuring that any bit of data can be "understood" without recourse to external resources that might go away. This builds on a nice property of content-addressed networks: the data's vocabulary can be incorporated into the hash of the data itself, in effect locking the data to a specific iteration of that vocabulary.

Implement a ddt command:

$ ddt rdf vocab <example.ttl>

that converts an input Turtle (.ttl) file to a CAR file that may be imported into IPFS for reference by any data that uses that vocabulary. Not incidentally, this should also serve as a means of exercising the code paths that convert RDF quad sets into CARs for storage.
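
A node-side sketch of what the command could do, assuming the n3 parser and the @ipld/car + multiformats stack; note it flattens each term to its string value, whereas the real converter would preserve full term types:

```ts
import { readFile, writeFile } from 'node:fs/promises'
import { Parser } from 'n3'
import * as Block from 'multiformats/block'
import * as dagCbor from '@ipld/dag-cbor'
import { sha256 } from 'multiformats/hashes/sha2'
import { CarWriter } from '@ipld/car'

async function vocabToCar(ttlPath: string, carPath: string): Promise<void> {
  const ttl = await readFile(ttlPath, 'utf8')
  // Parse Turtle into RDF/js quads, then into a plain serializable structure.
  const quads = new Parser().parse(ttl).map((q) => ({
    subject: q.subject.value,
    predicate: q.predicate.value,
    object: q.object.value,
  }))
  // Encode the whole quad set as a single dag-cbor block (a real version
  // might chunk large vocabularies into multiple blocks).
  const block = await Block.encode({ value: { quads }, codec: dagCbor, hasher: sha256 })
  // Write a CAR with that block as the sole root.
  const { writer, out } = CarWriter.create([block.cid])
  const chunks: Uint8Array[] = []
  const drained = (async () => { for await (const chunk of out) chunks.push(chunk) })()
  await writer.put(block)
  await writer.close()
  await drained
  await writeFile(carPath, Buffer.concat(chunks))
}
```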

SDK > Read and Write paths for http gateway and p2p node configuration

Why

Writes need to go through the CF http gateway. Reads can be via the CF gateway or local node.

How

CF needs to deal with user concerns like billing and auth, while the hypercore gateway deals with p2p writes and reads:

Client -> Request -> CF Worker -> Hypercore -> CF Worker -> Client
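
A small configuration sketch of that split (names illustrative): writes always go to the CF gateway, which owns auth and billing, while reads can try the CF gateway or a local node:

```ts
interface GatewayConfig {
  writeUrl: string   // the CF http gateway
  readUrls: string[] // CF gateway and/or local node, tried in order
}

async function read(cfg: GatewayConfig, path: string): Promise<Response> {
  for (const base of cfg.readUrls) {
    const res = await fetch(new URL(path, base))
    if (res.ok) return res // otherwise fall through to the next endpoint
  }
  throw new Error(`all read endpoints failed for ${path}`)
}

function write(cfg: GatewayConfig, path: string, body: BodyInit): Promise<Response> {
  return fetch(new URL(path, cfg.writeUrl), { method: 'POST', body })
}
```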

EPIC:
#3
