
venus's Introduction

Project Venus Logo

Project Venus - 启明星


Venus is an implementation of the Filecoin Distributed Storage Network. For more details about Filecoin, check out the Filecoin Spec.

Building & Documentation

For instructions on how to build, install and join a venus storage pool, please visit here.

Venus architecture

Thanks to key features such as security, ease of use, and a distributed storage pool, deploying a node with Venus is quite different from deploying one with Lotus. Details of the mining architecture can be found here.

Related modules

Venus loosely describes a collection of modules that work together to realize a fully featured Filecoin implementation. A list of stand-alone Venus module repos can be found here, each assuming a different role in the functioning of Filecoin.

Contribute

Venus is a universally open project and welcomes contributions of all kinds: code, docs, and more. However, before making a contribution, we ask you to heed these recommendations:

  1. If the proposal entails a protocol change, please first submit a Filecoin Improvement Proposal.
  2. If the change is complex and requires prior discussion, open an issue or a discussion to request feedback before you start working on a pull request. This is to avoid disappointment and sunk costs, in case the change is not actually needed or accepted.
  3. Please refrain from submitting PRs to adapt existing code to subjective preferences. The changeset should contain functional or technical improvements/enhancements, bug fixes, new features, or some other clear material contribution. Simple stylistic changes are likely to be rejected in order to reduce code churn.

When implementing a change:

  1. Adhere to the standard Go formatting guidelines, e.g. Effective Go. Run go fmt.
  2. Stick to the idioms and patterns used in the codebase. Familiar-looking code has a higher chance of being accepted than eerie code. Pay attention to commonly used variable and parameter names, avoidance of naked returns, error handling patterns, etc.
  3. Comments: follow the advice on the Commentary section of Effective Go.
  4. Minimize code churn. Modify only what is strictly necessary. Well-encapsulated changesets will get a quicker response from maintainers.
  5. Lint your code with golangci-lint (CI will reject your PR if unlinted).
  6. Add tests.
  7. Title the PR in a meaningful way and describe the rationale and the thought process in the PR description.
  8. Write clean, thoughtful, and detailed commit messages. This is even more important than the PR description, because commit messages are stored inside the Git history. One good rule is: if you are happy posting the commit message as the PR description, then it's a good commit message.

License

This project is dual-licensed under Apache 2.0 and MIT.

venus's People

Contributors

0x5459, aboodman, acruikshank, anorth, dependabot[bot], dignifiedquire, diwufeiwen, elvin-du, fatman13, frrist, gmasgras, hannahhoward, hunjixin, icorderi, ingar, laser, linzexiao, lkowalick, mishmosh, ognots, phritz, porcuquine, rosalinekarr, shannonwells, simlecode, stebalien, travisperson, whyrusleeping, zenground0, zl03jsj


venus's Issues

Increase unit test coverage to 80%

Description

We want to add tests to increase coverage to 80% and gain confidence in our code. Coverage is currently between 60% and 70%. Use your judgment regarding which tests to add.

Acceptance criteria

Please break up PRs by package

Ordered by priority:

  • types
  • core
  • node
  • abi
  • commands
  • mining
  • wallet
  • config
  • repo

Risks + pitfalls

  • See (inconclusive) discussion below on coverage vs. focus
  • Avoid tests that lock in a specific implementation; test I/O instead (see contributing.md)

Where to begin

Green CI

  • Disable Appveyor
  • Circle CI happy
    • Lint
    • Test
    • Deps

Enforce more structure on structs

I would like to propose using something like https://github.com/go-playground/validator to allow simple validation of struct contents, both on creation and on marshal/unmarshal.

Using the above package, we could write something like this:

type Block struct {
  RequiredField int `validate:"required"`
  OptionalField uint
}

func NewBlock(f1 int, f2 uint) (*Block, error) {
  out := &Block{
    RequiredField: f1,
    OptionalField: f2,
  }
  // validate is a shared *validator.Validate instance, e.g. validator.New().
  if err := validate.Struct(out); err != nil {
    return nil, err
  }
  return out, nil
}

logging on failure

In Ethereum, if a contract invocation fails, all state is reverted, including any logs. This makes debugging things a nightmare.

We need to make sure that method calls, even ones that fail, can still return some sort of information to the caller.

Agree on linting configuration

We currently have most of the linters that gometalinter provides enabled.

But, to quote @whyrusleeping

Your gometalinter config is excessive, take a look at how ethereums works: https://github.com/ethereum/go-ethereum/blob/master/build/ci.go#L327

So let's decide in this issue which ones we want, and make the change.

List of linters currently enabled

  • maligned
  • deadcode
  • dupl
  • errcheck
  • gas
  • goconst
  • gocyclo
  • gofmt
  • goimports
  • golint
  • gotype
  • gotypex
  • ineffassign
  • interfacer
  • megacheck
  • structcheck
  • unconvert
  • varcheck
  • vet
  • vetshadow
  • unparam

List of linters that ethereum runs with

  • vet
  • gofmt
  • misspell
  • goconst
  • gosimple
  • unconvert

Independently, we should do what Ethereum does: run the fast linters first and the slower ones afterwards, so that failures surface quickly during development.

hash out our testing strategy

The comments on f6d09de and #31 make it clear we need to talk about testing strategy. Let's punt for the moment on the discussion of cucumber vs testify, which at this point is an easy switch to make in either direction (let's talk about it once we have the larger issue resolved).

Here's how I see it, let me know if you see it differently.

We have a server, which is the node and the command implementations in Go. We have a CLI that presently doesn't implement much logic, it just dispatches commands to the server and relays results. The server has a lot of state and behavior. There are many ways it could be set up / configured and often we want to tightly control its state and/or behavior for testing so we can assert something very specific about what it does. For example we often want to stub/fake out a service or data structure the server relies on for a test. For this reason comprehensive functional testing of server behavior should test the server directly, in Go. It should test the server directly (as opposed to through a proxy like a CLI) because that's the easy way to be able to have fine-grain control over its state and behavior: by programmatically building a server instance appropriate for whatever test you want to write. This should happen in Go not only because that's the easy way to build a server instance for testing but also for the reason we use Go in the first place: it's powerful and expressive and there are lots of tools and libraries to work with.

The CLI serves two primary purposes. It can (doesn't yet, but probably will) implement sophisticated logic (eg, for making deals and sending data). This sophisticated logic should be tested directly, in Go (and not through a proxy like a CLI) for the same reasons above. The second purpose of the CLI is to relay commands to the server. Since any sophisticated logic is tested directly in Go for both pieces, mostly what we care about is an end-to-end test that verifies that the pieces are hooked up properly and can talk to one another. Beyond "can we send command X and get back what looks like a reasonable response", we shouldn't comprehensively test server functionality from behind the CLI (for the reasons above).

We could write the end-to-end tests for the CLI in Go or in another language like shell. Personally I have a strong preference for Go -- if we do shell the context switch is hard, it's a terrible programming language, there aren't good packages to build on, etc etc etc. However, if we have something tried and true that we can just drop in, and if we are not trying to do functional testing of the server beyond verifying e2e that the pieces can talk to each other and verifying some of the shelly glue (does it start and stop, etc), I'm OK with it. What I think would be sub-optimal for our ability to write nice tests would be to try to comprehensively functional-test the server from the other side of the CLI.

I do think we need an additional level of testing beyond direct unit functional testing in Go and e2e testing of the CLI. I think we want what we'd call integration tests where we spin up a bunch of nodes and make assertions about them. I understand IPFS has something like this. I'd like to cross that bridge when we get to it, or maybe in this convo after present discussion and the cucumber vs testify discussion.

So @dignifiedquire @whyrusleeping: any major points of departure from the above?

Also I'd like to take a crack at server testing as part of the story I'm doing, I've done quite a bit of this and there are some common patterns we can try.

we should consider auto-generating the spec

The spec is the ultimate commentary on the code, so maybe it should live alongside/in the code? We should consider building a tool that generates the spec automatically from repo files, comments & code. Like godoc or javadoc, but for the spec.

Spitballing here, hear me out. Looking over what's in the spec we have a few different things.

  • General commentary, motivation, and whatnot: this could live in a markdown file we use as the skeleton of the spec document. Additional content could be extracted from the source files into this skeleton.
  • Data structures: it's easy to annotate structures and fields that are top-level concepts either in comments // @TopLeveLConcept StateTree or using actual `pl.spec` annotations on fields, and we could use annotations that map to spec types if we wanted. Labeled comment sections and these data structures could be extracted directly.
  • Algorithms: should be in comments anyway. When we extract this description it's easy to automatically back-link to the code.
  • APIs are easy to annotate and extract. If we had an IDL that generated Go it'd be even easier, but we don't need it.
  • Other kinds of high-level descriptions like security properties: could be drawn from comments, readme/markdown files that live alongside the code

We could include composition hints in the comments or code for how to merge pieces from different source files (/at/Section Foo).

Advantages: reduces the risk of divergence between spec and implementation as we implement. You can see what is and is not implemented at a glance (annotations could differentiate them). We could flag changes that affect the spec in code reviews.

When we are "done" implementing we could detach the output spec from its inputs and just use it as a stand alone file if we wanted.

Improve run and withDaemon coupling

Currently one has to manually specify ports and be careful about which daemon a call goes out to. This needs to be improved to avoid accidental errors in this area.

I am not sure of the best API for this yet; suggestions are very welcome.

Configuration

Description

If a user configures a filecoin node and restarts, the configurations should persist in the restarted node. See #109 (WIP) for context on nodes, and #45 for details on configuration files.

Acceptance criteria

  • the configuration file is the source of truth for all settings
  • there will be helper commands later on to change it
  • use toml as configuration format for now
  • if there is no config file provided, print a warning and assume the default configuration
  • default location is the home directory .filecoin/config
  • there should be a command line flag to specify the location for the file to look in
  • there should be a command to reload the config file from disk to apply changes

Risks + pitfalls

Where to begin

  • Look at the options for commands.

Use custom types for storage amounts and token amounts

Description

We need types for these two:

  • storage (something like Bytes)
  • filecoin amounts (something like TokenAmount)

Currently, they are mostly represented by BigInts, which is far too permissive. An added benefit of switching to custom types is that function signatures will show the correct type rather than a generic big.Int.

Acceptance Criteria

  • Interacting with these types is done through wrappers. The wrappers provide safer methods, but we still store the actual values.

  • Make it easy to read comparisons, etc.

Risks + Pitfalls

N/A

Where to begin

write the ~10 fundamental interfaces in Go

A lot of thinking and careful design has gone into the fundamental filecoin interfaces. It's important that we capture that intent so that we don't accidentally violate security properties or important design elements (eg, that will be important to build upon down the road). Juan suggested we write the fundamental interfaces in Go and capture in comments their rationale and requirements. The interfaces we write aren't set in stone, but deviations should be carefully considered and we need to preserve the rationale and requirements.

Agree on merge strategy

I am used to squashing PRs, to get a cleaner git history with better descriptions, which I did for #43. But going forward we should agree on one strategy.

I would suggest one of these two

  1. squash commits into a single commit for each PR, by the person merging, or
  2. after approval of the PR the creator of the PR squashes the commits to one or more commits with clean commit messages, before it gets merged

Both of these options would let us create and maintain a readable Git history, while still being able to iterate fast and make small commits with so-so commit messages while working on something.

Guarantee ordering of newly accepted block delivery to subscribing channel

Description

Currently we use mostly naive go channels for this, but we need to ensure all callers and receivers are in the correct order, no matter what.

Acceptance Criteria

  1. The 'wait for message' code wants to be able to see every message we 'accept'. In this case, we will want to make sure we send every block in the order it appears on the chain (even if we accept a block two ahead, or there's a reorg).

  2. The miner code, which always wants the latest block to mine off of. In this case we want something slightly different from case 1: we likely don't care about intermediate blocks, we just want to know what to mine on.

Risks + Pitfalls

Where to begin
