
Brokernode

Getting Started

The broker node uses Docker to spin up a Go app, MySQL, the required downloads, and a private IOTA instance (TODO). You must first install Docker.

# To set up for the first time, you need a .env file. By default, use .env.test for unit tests.
# Feel free to modify the .env file. Note: we don't check in the .env file.
cp .env.test .env

# The database setup is different between dev and prod. Run the following script to change your DB setup.
# NOTE: Please do not check in the changes to your database.yml and docker-compose.yml file.
make docker-setup-dev
# make docker-setup-prod # use this to setup prod.

# Starts the brokernode on port 3000
DEBUG=1 docker-compose up --build -d # This takes a few minutes when you first run it.

# You only need to pass in --build the first time, or when you make a change to the container
# This uses cached images, so it's much faster to start.
DEBUG=1 docker-compose up -d

# Note, don't include `DEBUG=1` if you would like to run a production build.
# This will have less logs and no hot reloading.
docker-compose up --build -d
docker-compose up -d

# Executing commands in the app container
# Use `docker-compose exec app YOUR_COMMAND`
# Eg: To run buffalo's test suite, run:
docker-compose exec app buffalo test

# Get a bash shell in the app container
docker-compose exec app bash

# Once in the app container, you can use all buffalo commands:
brokernode# buffalo db migrate
brokernode# buffalo test

Prometheus Go client library

Monitoring and alerting toolkit https://prometheus.io/docs/introduction/overview/

To add a new histogram, register it with prepareHistogram() in the init of services/prometheus.go, then call histogramSeconds() and histogramData() (typically via defer) in the body of the function being measured.
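As a rough sketch of this defer-timing pattern using only the standard library (the names here are illustrative, not the actual services/prometheus.go API, which feeds real Prometheus histograms):

```go
package main

import (
	"fmt"
	"time"
)

// observations stands in for a Prometheus histogram in this sketch.
var observations = map[string][]float64{}

// observeSeconds records the elapsed time since start under a metric name.
// In the real service, histogramSeconds() would observe into a histogram
// registered earlier by prepareHistogram().
func observeSeconds(name string, start time.Time) {
	observations[name] = append(observations[name], time.Since(start).Seconds())
}

func createUploadSession() {
	// Defer at the top of the function body; time.Now() is evaluated
	// immediately, time.Since() when the function returns.
	defer observeSeconds("upload_session_resource_create_seconds", time.Now())
	// ... handler work would happen here ...
}

func main() {
	createUploadSession()
	fmt.Println(len(observations["upload_session_resource_create_seconds"]))
}
```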

Using the expression browser UI (http://localhost:9090/)

Let's look at some data that Prometheus has collected about itself. To use Prometheus's built-in expression browser, navigate to http://localhost:9090/graph and choose the "Console" view within the "Graph" tab.

As you can gather from http://localhost:9090/metrics, one metric that Prometheus exports about itself is called http_requests_total (the total number of HTTP requests the Prometheus server has handled). Go ahead and enter this into the expression console:

http_requests_total

This should return a number of different time series (along with the latest value recorded for each), all with the metric name http_requests_total, but with different labels. These labels designate different types of requests.

If we were only interested in requests that resulted in HTTP code 200, we could use this query to retrieve that information:

http_requests_total{code="200"}

To count the number of returned time series, you could write:

count(http_requests_total)

For more about the expression language, see the expression language documentation.

Using the graphing interface

To graph expressions, navigate to http://localhost:9090/graph and use the "Graph" tab.

For example, enter the following expression to graph the per-second HTTP request rate happening in the self-scraped Prometheus:

rate(http_requests_total[1m])


Welcome to Buffalo!

Thank you for choosing Buffalo for your web development needs.

Database Setup

It looks like you chose to set up your application using a mysql database! Fantastic!

The first thing you need to do is open up the "database.yml" file and edit it to use the correct usernames, passwords, hosts, etc... that are appropriate for your environment.

You will also need to make sure that you start/install the database of your choice. Buffalo won't install and start mysql for you.

Create Your Databases

Ok, so you've edited the "database.yml" file and started mysql, now Buffalo can create the databases in that file for you:

$ buffalo db create -a

Starting the Application

Buffalo ships with a command that will watch your application and automatically rebuild the Go binary and any assets for you. To do that run the "buffalo dev" command:

$ buffalo dev

If you point your browser to http://127.0.0.1:3000 you should see a "Welcome to Buffalo!" page.

Congratulations! You now have your Buffalo application up and running.

What Next?

We recommend heading over to http://gobuffalo.io and reviewing all of the great documentation there.

Good luck!

Powered by Buffalo

Grafana

User: admin, password: admin. The password can be changed in docker-compose.yml.

Config Datasource

http://localhost:3100/datasources

Name: Monitor
Type: Prometheus
URL: http://localhost:9090
Access: proxy

Config Graph

http://localhost:3100/dashboard/new

Add panel - Graph - Monitor - Metrics

Create a query -> Metrics lookup -> prometheus_ || any_variable
Panel data source -> Monitor

Available metric variables:

treasures_verify_and_claim_seconds
upload_session_resource_create_seconds
upload_session_resource_update_seconds
upload_session_resource_create_beta_seconds
upload_session_resource_get_payment_status_seconds
webnode_resource_create_seconds
transaction_brokernode_resource_create_seconds
transaction_brokernode_resource_update_seconds
transaction_genesis_hash_resource_create_seconds
transaction_genesis_hash_resource_seconds
claim_unused_prls_seconds
claim_treasure_for_webnode_seconds
check_alpha_payments_seconds
check_beta_payments_seconds
flush_old_web_nodes_seconds
process_paid_sessions_seconds
update_msg_status_seconds
bury_treasure_addresses_seconds
process_unassigned_chunks_seconds
purge_completed_sessions_seconds
store_completed_genesis_hashes_seconds
remove_unpaid_upload_session_seconds
update_time_out_datamaps_seconds
verify_datamaps_seconds

brokernode's People

Contributors

aaronvasquez, astor, automyr, edmundmai, jfarago, nepeckman, nymd, pandora23, pzhao5, rfornea, tenisakb, tropyx


brokernode's Issues

PRL Burying

https://docs.google.com/document/d/14zidIKgzY1pY95d362T0pHabYTnTcZMnP8GEClgX2wM/edit?usp=sharing

In order to support PRL burying, the user needs to be able to send PRLs to an address given by the alpha brokernode. The alpha brokernode will then give half of this to the beta brokernode where they will each race to embed as many PRLs into the datamap as they can.

There are 3 main components to this:

  1. Broker returns an invoice to webinterface when a new upload-session is created
  2. Webinterface needs to transfer PRL to the address given, and broker needs to keep track of this.
  3. Once the broker has its PRL, it splits it with the beta broker and races to embed it into the datamap.

Address memory and/or open connection issues on the hooknode

We have some connections on the hooknode that aren't getting closed. We fixed one of them, but there seems to be at least one more.

We have temporary solutions in place for now, but we should address this in the short to medium term.

Priority: Medium

[Broker] Send PRL to Beta

Once the broker event listener is triggered, it should send the PRL to beta's address.

Previous description:

The brokernode will need to be able to send PRL to other addresses.  

- It will need to send half the PRL to the beta brokernode.  

- It will also need to be able to send PRL to the eth addresses associated with PRL burying and invoke the bury() function on them.  

The above ^ could be two different methods but if they could be combined to one method that would be nice.  



Another developer was working on this issue in the past and left some comments on the axosoft board.  I am copying and pasting those comments here:  

This basically calls the 'make transaction' function from the other task, passing the beta node's address and the amount owed. The inputs are the two addresses and the total amount. It divides this in half and sends half to the beta broker node.


I'm doing all the tests in geth, and we are configuring everything RPC needs so we can make the JSON requests that establish a connection with the wallet and query the parameters we need, such as balances, blockchain information, blocks, etc. A bitcoind daemon was configured for these tests, and I am working on finding an alternative way to transfer ETH percentages so that the 50% share of the currency is sent to the beta broker. Note that Solidity does not yet support floating-point operations, so we are looking for other approaches for the percentage functions.

I'm testing and, so far, I cannot send the percentage corresponding to the second node. Investigating the documentation, I see that Solidity does not support floating-point numbers, which is why, when dealing with tokens, there is the decimals state variable.

That is surely why it did not send half.

The only option I see is to capture the ether that is sent and apply the logic before it goes through the contract.

What I'm not sure about is whether Solidity allows this, for security reasons: manipulating information before it passes through the smart contract. Does that make sense?

Specifically for the token distribution, there is a function or variable called decimals, but it only applies to the ICO.

It can partially handle decimals. The issue is that if the user sends 0.004 bitcoins and half is left for the BetaBroker, there is no way to calculate that with these methods.

Also, since 4/5 obviously works out to 0 remainder 4, possibly you're really aiming for something more like 80%?

You can bump up the accuracy by working with "decimal" as is common in financial markets. You pass pairs around, the digits and the decimal offset.

So, 1,234 would be written as [1234, 3] with the 3 indicating that there is a decimal in the third spot.

In this case, you would calculate 80, with 2 decimal places (meaning .80).

4 × 10² = 400, and 400 / 5 = 80; we raised 4 by 10², so return 80 with offset 2 and, optionally, the remainder.

I can write a function to calculate division this way. It is a possible solution for adding decimals, but it ends up being static: you have to indicate where the decimal point goes, and we are working on that. :)
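The digits-plus-offset idea above can be sketched in Go (a minimal illustration of the scheme discussed in the thread, not code from the repo):

```go
package main

import "fmt"

// divDecimal divides num by den using only integer math, returning the
// quotient scaled by 10^places together with the decimal offset.
// E.g. divDecimal(4, 5, 2) yields (80, 2), i.e. the pair [80, 2] = 0.80,
// matching the worked example above.
func divDecimal(num, den, places uint64) (digits, offset uint64) {
	scale := uint64(1)
	for i := uint64(0); i < places; i++ {
		scale *= 10
	}
	return num * scale / den, places
}

func main() {
	d, o := divDecimal(4, 5, 2)
	fmt.Println(d, o) // prints "80 2"
}
```

The same trick is why ERC-20-style tokens carry a decimals variable: amounts are kept as integers in base units and only rendered with a decimal point at the edges.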

https://github.com/oysterprotocol/smartcontract/commit/aba8bf78b24147a606efb07518e23081d7b40310  

Use go dep for dependency management

This would allow for faster docker builds since we can cache resolving dependencies with a lock file.

It's also a good idea to lock dependency versions so we don't get unexpected behavior when a dependency changes.

Revisit payment system--wait for PRL to arrive

At the moment we are going to have the brokernode just check that a transaction exists on the ETH blockchain, and that everything about the transaction is correct ("to" address, amount, etc.), and will go ahead and let the user start the upload.

We may want to revisit this and explicitly have the broker wait until it actually has the money first, especially for very large and expensive uploads.

Maximize data per chunk

We should be able to store somewhere between 1 KB and 1.3 KB of data per chunk on the tangle. Currently we appear to be storing only around 0.6 KB per chunk. We need to figure out what is causing this and change it.

Possible causes:
-Something to do with the base64 conversion
-The encryption we are using

There may be a way to tell the base64 conversion or the encryption to produce output of a certain length, or we might find a way to avoid base64 altogether.

As a developer, I want to dock a hooknode's score if it responds with "unavailable"

This story should be done after we have implemented a method on the broker to determine if a hooknode is "ready."

Hooknodes are not supposed to be moonlighting and they are not supposed to lie about their stats. Also, our method of determining hooknode "readiness" should be reliable. If a hooknode responds with "unavailable" when we attempt to send it chunks, dock its score.

This is so that we can:
-punish hooks that moonlight for other brokers or lie about their score.

[Broker] Generate a new ETH address for PRL treasure

A unique address is used to identify which upload_session a transfer was sent for. When an upload-session is created, the broker will also need to create and save a unique ETH address. This will be stored in the upload_sessions table.

Note: The beta broker will also have a unique eth address for the alpha broker to send PRL to.

[Broker] Update POST /upload-sessions API to handle PRL payments

  1. Extra param in request storageLengthInYears
  2. Response will have invoice: { cost, ethAddress }
  3. Make same changes to beta broker.
  4. Broker needs to store ethAddressAlpha, ethAddressBeta, and totalCost in the DB.

The required PRL amount will be based on file size, storage length (in years) and a hardcoded peg.

[Broker] Modify DB to support PRL Burying

Add the following fields to upload_sessions table:

  1. eth_address_alpha (unique index)
  2. eth_address_beta (unique index)
  3. total_cost
  4. payment_status (paid, unpaid)
  5. treasure_idx_map

Note: total_cost is the cost charged to the storage user, so the beta broker can expect to receive half.

Add obfuscation to the building of data maps.

Currently, to build our data maps, we start with the genesis hash and take a sha256 hash of it, then continue building the chain by hashing each previous hash until we have as many as we need. We then convert each of these values to trytes, and those tryte values are the addresses we use on the tangle.

There is a problem with this. If an actor knew one of our iota addresses, they could convert that tryte value back to ascii characters. Then they could get a sha256 hash of that ascii string, and keep hashing each result, building out the hash chain. Then they could convert all those hashes to trytes, and they'd basically have totally rebuilt the data map starting from the address they originally had.

The reason this problem is an issue (among others) is that when the webnode wants to buy something from the broker and the broker sends it a chunk from an ongoing upload, the webnode could do the work and then effectively be able to build most of the data map on its own without having to buy genesis hashes.

So, we need to change how we are building our data maps. We will add an extra step.

Instead of this:
Sha256 Hash ---> (trytes conversion) ---> iota address

We will do something like this:
Sha256 Hash ---> (some other sha algorithm) ---> New Hash ---> (trytes conversion) ---> iota address

Sha384 is probably the most sensible choice for the new sha algorithm to use.

Obfuscation.png

Smarter chunking to handle large files

If a file is too large, it won't fit into memory all at once, so we need to chunk it smartly. We should be careful to create consistent chunks for the alpha and beta brokers (which encrypt from two different starting points).

Setup Failure php artisan migrate

I just cloned master. All steps were successful until the following:
php artisan migrate

The script responds with a production warning (should it be set to production?).
After selecting 'yes', the response is:

In Connection.php line 664:

  SQLSTATE[HY000] [2002] Connection refused (SQL: select * from information_schema.tables where table_schema = forge and table_name = migrations)

In Connector.php line 67:

  SQLSTATE[HY000] [2002] Connection refused

No .env file is present. Creating one and adding a DB_HOST property doesn't solve the problem. Am I missing a step?

Mac High Sierra 10.13.2
Used brew directions
Docker version: 17.12.0

As a developer, I want to update data_map status options

Re-add status “pending” and swap the order of “pending” and “unassigned.”

What these statuses mean:
"pending" - data map entry has been created for that chunk, but have not yet received all the data from the client for that chunk. This is the status the moment a row in the data_map table is created.
"unassigned" - have received all the data for a chunk, but nobody is assigned to it yet. This is what the brokernode sets a chunk to once it actually receives the chunk from the client.

This is so that:
We have something to check for when getting webnode work. When we remove hooknode queueing there will be chunks which are ready to go, but are waiting for assignment. The API for the webnode needs a status it can check for to give work to the webnode.

Priority: High

[Discussion] PoW separate from hooknodes?

Transactions will appear on the tangle quicker if the trytes array is broadcast on multiple iota nodes (can be brokernodes, hooknodes, or public iota nodes that allow use of the broadcastTransactions method). Who will call other nodes and ask them to broadcast the transaction? Will we use our own hooknodes or attempt to use public nodes?

Discuss/Decide: Have the Go attachToTangle called on the hooknodes, or separate this out to a PoW machine?

Pros of separating to separate PoW machines:

  • Will only need about 10 of these and can get rid of lots of our hooknodes
  • Since we don’t need many, can make them very powerful
  • Separation of concerns
  • We can also delegate the PoW to PaaS or FaaS instances, adding scalability to the whole system.

Cons:

  • “Centralized”
  • More http requests
  • Major change to the protocol and the economic structure of the system
  • Would have to update a lot of old code

Remove Spam Hack

Once we decide Go hooknodes are stable enough, we might no longer need to spam to get faster transactions. We should disable all the hacks we had to speed up transactions.

Remove the 'getTransactionsToApprove' call on the brokernode before passing chunks to hooknode.

-Don’t call getTransactionsToApprove on the brokernode anymore prior to passing the chunks to the hooknodes. The hooknodes will make this call themselves.
-Don't need to store 'trunkTransaction' and 'branchTransaction' in the data_map table for chunks anymore.
-Currently the brokernode has the ability to verify that a trunk and branch on the tangle matches what it has in its data_map. It no longer needs this ability, remove the code that deals with this. It must still verify the message matches.

There will be a corresponding issue for the hooknodes, to add 'getTransactionsToApprove' calls to the hooknodes. Try to push the changes for both issues into their respective master branches at the same time.

This is so that we can achieve these goals:
-Making the hooknodes call 'getTransactionsToApprove' takes a lot of burden off the brokernode
-Use tips that come from all throughout the tangle
-Potentially increase our confirmation rates due to less delay between the tips getting selected and actually getting used.

Priority: Medium

Better PoW for hooknode

Need to fork the giota lib, since attachToTangle doesn't take a PoW algorithm.

I think we'll probably want to string together some combination of getBestPow, doPow, broadcastTransactions and storeTransactions

Build the data map asynchronously so we don't get timeouts

Currently the client waits for the broker to finish building the data map, which causes the client to experience timeouts once files exceed a certain size. The bottleneck should not be the actual calculation of the hashes, but rather storing them in the DB.

Building the data map asynchronously could have the following issues:
Andrew: The big problem with this one is that the beta broker is starting from the end, thus it would start well after the alpha broker and have to catch up.

Here are some thoughts from Telegram of how to approach this problem:
Bruno Block admin:
it should be async
send to DB but dont wait for confirmation
Automyr:
We could hash to the end and then build the datamap as the upload
progresses
Bruno Block admin:
bulk INSERT statements usually help
Rebel:
Ok
Bruno Block admin:
I think INSERT IGNORE makes the difference?
"Automyr: We could hash to the end and then build the datamap as the upload pr…"
This makes sense
the datamap should get built as chunks are received
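The bulk INSERT IGNORE idea from the thread can be sketched as follows: build one multi-row statement so n data-map rows reach MySQL in a single round trip, instead of one INSERT per chunk. The table and column names here are illustrative, not the actual schema:

```go
package main

import (
	"fmt"
	"strings"
)

// bulkInsertSQL builds a single multi-row INSERT IGNORE statement with
// placeholders for n rows. IGNORE makes re-sent rows no-ops on the
// unique key instead of errors, which suits retried uploads.
func bulkInsertSQL(n int) string {
	rows := make([]string, n)
	for i := range rows {
		rows[i] = "(?, ?, ?)"
	}
	return "INSERT IGNORE INTO data_maps (genesis_hash, chunk_idx, hash) VALUES " +
		strings.Join(rows, ", ")
}

func main() {
	// Would be passed to db.Exec along with 2*3 flattened arguments.
	fmt.Println(bulkInsertSQL(2))
}
```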

The goal for this task is to implement something that satisfies the following:
-build data map asynchronously so client can go ahead and start sending chunks rather than waiting for the broker to be done saving the data map to its DB
-do this in such a way that the beta broker is not penalized relative to the alpha broker

This is so that:
-we can send files of any size without timeout issues due to data map building

Priority: Medium

Don't automatically add all chunks to the queues/channels on the brokernode

This issue was written at a time when we were still going to be sending the work to the hooknodes. Now, the brokers are just going to be supercomputers and will do the PoW themselves.

The basic principles of this issue remain the same--we don't want to automatically assign all chunks to a queue/channel. When there isn't a "free" channel, most of the chunks should remain "unassigned" so that webnodes have a chance to grab them.

Complete this issue basically the same way as is specified below, except instead of sending to hooknodes, we will send to a brokernode's channel/queue.

Additional requirements:
-check # of cores on startup and allocate (numCores - 1) channels/queues for chunk processing
-the job should call the following methods: findTransactions (to make sure the other broker hasn't already attached the chunks), prepareTransfers, getTransactionsToApprove, attachToTangle/PoW
-can add methods/callbacks to structs and models. Add some kind of "SendTo___" method to the struct for the Jobs/channels. This should abstract away the logic to add a batch of chunks to that channel. When we are ready to re-add the hooknodes, we can just change the method that we call to a different method that makes the request to the hooknode
-keep all channel/job related code clearly separate from the rest of the code for easy removal later
-as much as possible, try to write the code in such a way that it will be easy to replace the channels with hooknodes in the future.


Old content, complete the issue more-or-less the same way except with job queues on the broker instead of hooknodes.

Currently we are just sending all chunks to the hooknodes, which has the following negative effects:
-doesn't give the webnodes much of a chance to do any work
-results in hooknodes potentially using old tips
-potentially causes hooknodes to lose points if the brokernodes send them more transactions than they can handle before the brokernode declares the chunks "timed out"

Thus we need a way to maintain some chunks in the datamap and only send to hooknodes when they are ready. Even if no webnodes come along, speed should not be significantly impacted if our method of figuring out when hooknodes are "ready" is reasonably accurate.

We will add a new field to the hooknodes table, "estimated_ready_time"

The way the brokernode will populate this "estimated_ready_time" column is as follows:
-once the brokernode sends the hooknode X chunks, it will take the average amount of time Y that the hooknode spends per chunk, and add it to the hooknode's preferred buffer time Z. The brokernode will then set the "estimated_ready_time" of the hooknode to:

estimated_ready_time = currentTime + (X * Y) seconds + Z seconds

In subsequent attempts to send chunks to hooknodes, the brokernode will only look at hooknodes whose "estimated_ready_time" has already passed.

There are different ways to enable the brokernode to get the "average time per chunk," but my suggestion is the following:

When the hooknode responds with "accepted_work" and its load status, it also sends an array of objects, where each object has a number of chunks and the time elapsed for that batch of chunks in seconds. A hypothetical example of the array might look like this:

{
  num_chunks: 10,
  elapsed_time: 50
},
{
  num_chunks: 7,
  elapsed_time: 40
}, ...etc.

The brokernode will traverse this array, total up the number of chunks, total up the time elapsed in seconds, and calculate an "average time per chunk" for the hooknode.

The hooknode will also have an environment variable, with a name such as "POW_REQUEST_BUFFER_TIME," and it will send this to the brokernode as well, in the same response that carries the "accepted_work" status and the array of objects. The brokernode will multiply however many chunks it just sent by the average time per chunk, add the buffer time to calculate the estimated ready time, and update the hooknode's "estimated_ready_time" entry in the db table.

You could also just calculate the average time per chunk on the hooknode and send the already calculated value to the brokernode. The point is simply that the broker can calculate when it thinks the hooknode will be ready based on num chunks, average time per chunk, and the hooknode's preferred buffer time.

This is so that:
-we aren't overwhelming the hooknodes with chunks and bringing down their scores
-we have some work to give to the webnodes

Priority: High

DELETE THIS

Whoever takes up getting the async stuff working properly will need the following information.
The branch I am referring to is the webnode branch pow-tests. The issue is that I don't really know how to handle asynchronicity the proper way in redux.

The response from the call of this function:
https://github.com/oysterprotocol/webnode/blob/pow-tests/directories/src/components/storage/index.js#L60

Is needed to be passed into this function:
https://github.com/oysterprotocol/webnode/blob/pow-tests/directories/src/components/storage/index.js#L88

Also take a look at this function call:
https://github.com/oysterprotocol/webnode/blob/pow-tests/directories/src/components/storage/index.js#L90

I'm getting the result of the PoW and passing it to a callback, then in the callback I'm calling broadcastToHooks. This works fine, I'm just not sure if that's an encouraged pattern in redux. Feel free to change it.
