
web3-provider-engine's Introduction

Web3 ProviderEngine

Web3 ProviderEngine is a tool for composing your own web3 providers.

Caution

This package has been deprecated.

This package was originally created for MetaMask, but has been replaced by @metamask/json-rpc-engine, @metamask/eth-json-rpc-middleware, @metamask/eth-json-rpc-provider, and various other packages.

Here is an example of how to create a provider using those packages:

import { providerFromMiddleware } from '@metamask/eth-json-rpc-provider';
import { createFetchMiddleware } from '@metamask/eth-json-rpc-middleware';
import { valueToBytes, bytesToBase64 } from '@metamask/utils';
import fetch from 'cross-fetch';

const rpcUrl = '[insert RPC URL here]';

const fetchMiddleware = createFetchMiddleware({
  btoa: (stringToEncode) => bytesToBase64(valueToBytes(stringToEncode)),
  fetch,
  rpcUrl,
});
const provider = providerFromMiddleware(fetchMiddleware);

provider.sendAsync(
  { id: 1, jsonrpc: '2.0', method: 'eth_chainId' },
  (error, response) => {
    if (error) {
      console.error(error);
    } else {
      console.log(response.result);
    }
  }
);

This example was written with v12.1.0 of @metamask/eth-json-rpc-middleware, v3.0.1 of @metamask/eth-json-rpc-provider, and v8.4.0 of @metamask/utils.

Composable

Built to be modular: the engine works via a stack of 'subproviders', which are like normal web3 providers but handle only a subset of RPC methods.

Subproviders can emit new RPC requests in order to handle their own; e.g. eth_call may trigger eth_getBalance, eth_getCode, and others. The provider engine also handles caching of RPC request results.

const Web3 = require('web3')
const ProviderEngine = require('web3-provider-engine')
const CacheSubprovider = require('web3-provider-engine/subproviders/cache.js')
const FixtureSubprovider = require('web3-provider-engine/subproviders/fixture.js')
const FilterSubprovider = require('web3-provider-engine/subproviders/filters.js')
const VmSubprovider = require('web3-provider-engine/subproviders/vm.js')
const HookedWalletSubprovider = require('web3-provider-engine/subproviders/hooked-wallet.js')
const NonceSubprovider = require('web3-provider-engine/subproviders/nonce-tracker.js')
const RpcSubprovider = require('web3-provider-engine/subproviders/rpc.js')

const engine = new ProviderEngine()
const web3 = new Web3(engine)

// static results
engine.addProvider(new FixtureSubprovider({
  web3_clientVersion: 'ProviderEngine/v0.0.0/javascript',
  net_listening: true,
  eth_hashrate: '0x00',
  eth_mining: false,
  eth_syncing: true,
}))

// cache layer
engine.addProvider(new CacheSubprovider())

// filters
engine.addProvider(new FilterSubprovider())

// pending nonce
engine.addProvider(new NonceSubprovider())

// vm
engine.addProvider(new VmSubprovider())

// id mgmt
engine.addProvider(new HookedWalletSubprovider({
  getAccounts: function(cb){ ... },
  approveTransaction: function(cb){ ... },
  signTransaction: function(cb){ ... },
}))

// data source
engine.addProvider(new RpcSubprovider({
  rpcUrl: 'https://testrpc.metamask.io/',
}))

// log new blocks
engine.on('block', function(block){
  console.log('================================')
  console.log('BLOCK CHANGED:', '#'+block.number.toString('hex'), '0x'+block.hash.toString('hex'))
  console.log('================================')
})

// network connectivity error
engine.on('error', function(err){
  // report connectivity errors
  console.error(err.stack)
})

// start polling for blocks
engine.start()

When importing in webpack:

import * as Web3ProviderEngine  from 'web3-provider-engine';
import * as RpcSource  from 'web3-provider-engine/subproviders/rpc';
import * as HookedWalletSubprovider from 'web3-provider-engine/subproviders/hooked-wallet';

Built For Zero-Clients

The Ethereum JSON RPC was not designed to have one node service many clients. However, a smaller, lighter subset of the JSON RPC can be used to provide the blockchain data that an Ethereum 'zero-client' node needs to function. We handle as many types of requests locally as possible, and let data lookups fall back to some data source (hosted RPC, blockchain API, etc.). Categorically, we don't want, and can't have, the following types of RPC calls go to the network:

  • id mgmt + tx signing (requires private data)
  • filters (requires a stateful data api)
  • vm (expensive, hard to scale)
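As a sketch of that split (the method grouping and function names here are illustrative, not the engine's actual routing code), the zero-client idea amounts to a dispatch between local handlers and a remote data source:

```javascript
// Methods that must never leave the client: id mgmt + signing (private keys),
// filters (stateful), vm execution (expensive server-side).
// This set is illustrative, not exhaustive.
const LOCAL_METHODS = new Set([
  'eth_accounts',
  'eth_sign',
  'eth_newFilter',
  'eth_getFilterChanges',
  'eth_call',
]);

// Route a JSON-RPC payload: handle locally where required,
// otherwise fall back to a remote data source (hosted RPC, etc.).
function routeRequest(payload, handleLocally, forwardToDataSource) {
  if (LOCAL_METHODS.has(payload.method)) {
    return handleLocally(payload);
  }
  return forwardToDataSource(payload);
}
```

Plain data lookups like eth_getBalance or eth_getBlockByNumber take the fallback path; everything private, stateful, or expensive stays local.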

Running tests

yarn test

web3-provider-engine's People

Contributors

2-am-zzz, anxolin, axic, benjamincburns, danfinlay, dependabot[bot], dylanjw, fabioberger, flyswatter, frankiebee, gislik, greenkeeper[bot], gudahtt, jrainville, kumavis, legobeat, levity, logvik, logvinovleon, mcmire, metamaskbot, mhhf, mvayngrib, pixelmatrix, rekmarks, rickycodes, tcoulter, ukstv, whymarrh, wjmelements


web3-provider-engine's Issues

vm subprovider - relies on currentBlock

Tried to run eth_call, but currentBlock wasn't set yet:

   [Error: TypeError: Cannot read property 'parentHash' of undefined
       at blockFromBlockData (/Users/kumavis/dev/web3-provider-engine/subproviders/vm.js:267:38)
       at VmSubprovider.runVm (/Users/kumavis/dev/web3-provider-engine/subproviders/vm.js:66:15)
       at VmSubprovider.handleRequest (/Users/kumavis/dev/web3-provider-engine/subproviders/vm.js:33:8)
       at next (/Users/kumavis/dev/web3-provider-engine/index.js:93:18)
       at FilterSubprovider.handleRequest (/Users/kumavis/dev/web3-provider-engine/subproviders/filters.js:76:7)
       at next (/Users/kumavis/dev/web3-provider-engine/index.js:93:18)
       at DefaultFixtures.FixtureProvider.handleRequest (/Users/kumavis/dev/web3-provider-engine/subproviders/fixture.js:25:5)
       at next (/Users/kumavis/dev/web3-provider-engine/index.js:93:18)
       at /Users/kumavis/dev/web3-provider-engine/subproviders/cache.js:98:5
       at BlockCacheStrategy.hitCheck (/Users/kumavis/dev/web3-provider-engine/subproviders/cache.js:211:12)],

"Emulate"/"Substitute" provider

I think it would make sense to move some of the substitution code into its own provider. By this I mean official RPC methods which can be emulated by other RPC methods.

One of them is the recent eth_getLogs in etherscan. Maybe there are others that fit the bill?

Breaking Change - error handling

ProviderEngine previously appended errors to the result object without including an error in the callback response. It now passes the error to the callback and also includes it in the result object.

so in summary:

  • json rpc has 1 ‘channel’, the json response blob
  • the callback has 2 ‘channels’, the error field and the json response blob

we had problems when we:

  • presented the json response channel incorrectly
  • presented the error field channel incorrectly

so we’ll redundantly present both channels correctly
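A minimal sketch of the "redundantly present both channels" rule (the function and field names here are illustrative, not the engine's actual internals; the error shape follows the snippet agreed on in the chat below):

```javascript
// On failure, populate BOTH channels: the JSON-RPC response blob gets an
// `error` object, and the error-first callback gets the error as well.
function finishRequest(payload, error, result, callback) {
  const response = { id: payload.id, jsonrpc: '2.0' };
  if (error) {
    // JSON-RPC channel: error object on the response blob.
    response.error = {
      message: error.stack || error.message || String(error),
      code: -32000,
    };
    // Callback channel: error-first argument, redundantly.
    callback(error, response);
  } else {
    response.result = result;
    callback(null, response);
  }
}
```

A server wrapping the engine can then serialize `response` over the wire unchanged, while in-process consumers still get an error-first callback.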

Here are some musings on the topic.

appending the error to the result is really confusing when there's an error parameter in the callback

tim [11:45 AM] 
Let me point you to the JSON RPC spec.

kumavis [11:46 AM] 
im familiar with the json rpc spec

[11:46] 
let me point you to the callback spec?

tim [11:46 AM] 
See #5: http://www.jsonrpc.org/specification

kumavis [11:46 AM] 
we’re kinda at an impasse there, theres no correct answer

tim [11:47 AM] 
At that point in the code, the error object is sent back to web3. Web3 implements the spec, and expects an `result.error`. Not having `result.error` will cause other issues that hide the real problem, because your result won’t be formatted correctly.

kumavis [11:47 AM] 
this is of course really a design failure of our favorite love-to-hate library

tim [11:47 AM] 
kumavis: This isn’t a web3 issue, if that’s what you mean.

[11:47] 
This is the broader JSON RPC spec.

kumavis [11:47 AM] 
it is what i mean

[11:48] 
theres more than the json rpc spec going on at this point in the stack

tim [11:48 AM] 
Not the Ethereum JSON RPC. But the general JSON RPC

kumavis [11:48 AM] 
theres an error first callback

tim [11:49 AM] 
kumavis: At that point in the code, the provider engine has wiped its hands of anything that has happened.

[11:49] 
kumavis: The provider engine implements the RPC spec, and should follow it.

[11:49] 
You break dapps and web3 otherwise.

kumavis [11:50 AM] 
yeah but it also implements error first callbacks

[11:50] 
i dont really see how the rpc spec trumps the callback format

tim [11:50 AM] 
kumavis: Where’s that error-first callback given to?

kumavis [11:51 AM] 
its used in places like this https://github.com/ethereum/web3.js/blob/0ae82cf895b74b264d309d21feecbf3645713d0b/lib/web3/requestmanager.js#L80-L89

GitHub
ethereum/web3.js
web3.js - Ethereum Compatible JavaScript API

tim [11:51 AM] 
(this question is meant to be rhetorical)

kumavis [11:51 AM] 
and if you notice its handling the error-first case

tim [11:51 AM] 
Hmm.

kumavis [11:52 AM] 
though perhaps differently than if it was attached to the response obj

tim [11:52 AM] 
I think `.error` is handled further up, but let me check. (edited)

kumavis [11:52 AM] 
nah its that line 85-87 there

[11:52] 
wells its other places as well

[11:53] 
i realized this bc we actually have some failing tests

[11:53] 
that we missed bc of the json error format / callback error format mixup

tim [11:54 AM] 
Hmm. The only reason I’m arguing so strongly for the spec is that I’ve had the opposite problem: Missed errors because various responses didn’t comply with the spec.

kumavis [11:55 AM] 
like web3 was dropping the err?

tim [11:56 AM] 
If my memory serves, yes. But that `callback(null, result.result);` line doesn’t seem to back that up.

[11:57] 
Let me study this code more.

tim [12:03 PM] 
kumavis: Ah, I found where I ran into the issue. It’s when the provider engine is running server side.

[12:04] 
Error-first callbacks are great when the provider engine and the web3 instance are running in the same context. But when running server side, you’re then requiring everything that implements the provider engine to format error responses correctly against the spec, which likely won’t happen.

[12:04] 
When a badly formatted error response is sent over the wire, web3 drops it.

kumavis [12:04 PM] 
why is it a platform problem?

tim [12:05 PM] 
> When a badly formatted error response is sent over the wire, web3 drops it.
(edited)

kumavis [12:05 PM] 
web3 is ignorant of the wire

[12:05] 
so what exactly do you mean?

tim [12:05 PM] 
Ya, let me write it out more specifically.

[12:06] 
1. Web3 sends a payload via an XMLHttpRequest. Properly formatted JSON blob.

[12:06] 
2. When the server-side provider engine respond to it, currently (without your changes) if there’s an error, it will output a properly formatted json blob which you can send back over the wire. (edited)

kumavis [12:07 PM] 
ah ok

tim [12:08 PM] 
3. With your changes, whatever’s wrapping the provider engine will need to catch your errors first error, and then create a properly formatted json blob itself, which is prone to error or people just won’t do it out of ignorance to the issue.

[12:08] 
4. Web3 either won’t get the error or it will get an Error serialized as a string, which will be squashed as an “invalid response” because it’s not properly formatted.

[12:08] 
(done)

kumavis [12:09 PM] 
done(null, result) :laughing:

[12:09] 
ok so as you said, i suggest that it is the responsibility of the server thats responding to the error

[12:09] 
your concern that people will not handle it correctly is correct

[12:10] 
but we have that problem on the other side as well - that is, we werent even handling it correctly in our tests where we check the result (edited)

tim [12:11 PM] 
kumavis: I don’t understand: The provider engine merges errors-first errors into JSON RPC errors. These errors are then handled by web3 correctly.

[12:11] 
How were they missed?

kumavis [12:11 PM] 
in the tests we’re looking at the error in the callback to make sure everything went smoothly (edited)

[12:12] 
but things were actually broken and the error was on the result

tim [12:12 PM] 
kumavis: How about a truce: Send both the error ​*and*​ the result properly formatted with an error? (edited)

[12:12] 
i.e.,

kumavis [12:12 PM] 
yeah, i’m considering that

[12:12] 
i think it might be the best answer

tim [12:13 PM] 
        resultObj.error = {
          message: error.stack || error.message || error,
          code: -32000
        }
        finished(error, resultObj)


[12:13] 
Agreed.

kumavis [12:14 PM] 
so in summary:
* json rpc has 1 ‘channel’, the json response blob
* the callback has 2 ‘channels’, the error field and the json response blob
we had problems when we:
* presented the json response channel incorrectly
* presented the error field channel incorrectly
so we’ll redundantly present both channels correctly (edited)

tim [12:15 PM] 
Ugh. I still need to change testrpc’s error handling even if you do that.

kumavis [12:15 PM] 
whysthat?

tim [12:15 PM] 
            provider.sendAsync(payload, function(err, result) {
              if (err != null) {
                headers["Content-Type"] = "text/plain";
                response.writeHead(500, headers);
                response.end(err.stack);
              } else {
                headers["Content-Type"] = "application/json";
                response.writeHead(200, headers);
                response.end(JSON.stringify(result));
              }
            });


[12:15] 
I don’t want to 500 on a handled error.

kumavis [12:16 PM] 
yeah ill bump a major version

tim [12:16 PM] 
Well, what I mean is, when I upgrade to your new version.

[12:16] 
It actually makes things more complicated - because of the two channels.

[12:17] 
This is the server-side perspective, of course.

kumavis [12:17 PM] 
not just server side but reserializing the provider result — ill have this same issue in the dapp-extension bridge

[12:18] 
server side sounds like its a node vs browser platform issue

tim [12:18 PM] 
Well, JSON RPC was meant to be serialized - so there was only ever one channel to begin with. (in the strict sense; it was meant to go over the wire) (edited)

kumavis [12:18 PM] 
caution: lanes merge ahead (edited)

[12:19] 
the json rpc spec makes sense

[12:20] 
just gets confusing when you wrap it with other standards

tim [12:20 PM] 
Right.

[12:20] 
I should probably try/catch my response anyway - that’s when it should 500. (edited)

[12:20] 
Remove that if statement.

kumavis [12:20 PM] 
try/catching async tho

[12:21] 
slippery fish

tim [12:21 PM] 
kumavis: Well right - that’s the only time an error won’t come through the channel I expect.

[12:21] 
kumavis: Assuming we “merge channels” like you suggested, I can just rely on the result object to correctly handle the error, and I could forward that along.

[12:22] 
kumavis: The testrpc should 200 when a successful request was made, even if there was an error. (because the rpc has its own error channel, and so it ​*was*​ a successful request) (edited)

[12:22] 
500 only when something totally unexpected happens.

kumavis [12:22 PM] 
yeah makes sense

[12:22] 
want me to file an issue on testrpc

tim [12:23 PM] 
Sure.

nonce-tracker

In order to guarantee an accurate nonce, we should block while eth_sendRawTransaction is in flight.
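One way to get the blocking behavior described above (a hypothetical sketch, not the nonce-tracker's actual implementation) is to serialize sends through a promise queue, so the tracked nonce is never read while a previous send is still in flight:

```javascript
// Queue tasks so each one starts only after all previously queued
// tasks have settled (resolved or rejected).
class NonceLock {
  constructor() {
    this.last = Promise.resolve();
  }
  // Returns a promise for `task`'s own result.
  acquire(task) {
    const run = this.last.then(task, task);
    this.last = run.catch(() => {}); // keep the chain alive on errors
    return run;
  }
}
```

Each sendRawTransaction would be wrapped in `lock.acquire(...)`, and the nonce read happens inside the task, after the previous send has settled.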

Defining non-standard RPC method for transaction list

The official RPC doesn't (and probably never will) support a method to retrieve the list of transactions associated with an address.

I think it could make sense to have a non-standard method for that. There are multiple ways to produce that data:

  • using a data provider like etherscan/etherchain/ethercamp
  • hooking into the block polling
  • building one's own service for that
  • testrpc should be able to provide it too

The reason having a fixed API is useful:

  • client code needs to be written once
  • current data providers can be used
  • there's a skeleton to work with when creating one's own private service

Suggestion: zeroclient_getTransactions which returns an array of transactions

Sample response: http://api.etherscan.io/api?module=account&action=txlist&address=0xde0b295669a9fd93d5f28d9ec85e40f4cb697bae&sort=asc
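Purely as an illustration of the suggestion above (the method name comes from the suggestion; the handler signature and the idea of a pluggable data-provider callback are assumptions, not an existing API), a subprovider-style handler might look like:

```javascript
// Handle the proposed non-standard method; return false for anything else
// so other subproviders can take over. `fetchTxList` is a pluggable data
// source (block explorer API, block-polling index, private service, ...).
function handleZeroclientGetTransactions(payload, fetchTxList, end) {
  if (payload.method !== 'zeroclient_getTransactions') return false;
  const [address] = payload.params;
  fetchTxList(address, (err, txs) => {
    if (err) return end(err);
    end(null, txs || []); // always an array of transaction objects
  });
  return true;
}
```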

Cache - broken transaction caches

Certain caches currently marked 'perma cache' need a lower caching level.
For example, eth_getTransactionByHash has a block reference and returns incomplete information while the transaction is still pending. Once it's mined into a block, the block reference is added, but we've already cached the earlier result. Also note that the block reference could change at any time due to forks.

eth_getCode caching strategy

We should cache eth_getCode requests in such a way that:

  • all eth_getCode requests that specify a block after the code was added to the chain hit the cache;
  • all eth_getCode requests that specify a block before the code was added to the chain return null (possibly cached); and
  • null responses never overwrite the cached entries above
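The three rules above can be sketched as follows (class and field names are assumptions; this tracks the earliest block at which code was observed and the latest block at which a null was observed):

```javascript
class CodeCache {
  constructor() {
    // address -> { code, codeFrom, nullThrough }
    this.entries = new Map();
  }
  get(address, blockNumber) {
    const entry = this.entries.get(address);
    if (!entry) return undefined;
    // Rule 1: any block at or after the earliest observed code hits the cache.
    if (entry.code !== undefined && blockNumber >= entry.codeFrom) return entry.code;
    // Rule 2: blocks known to predate the code return a cached null.
    if (entry.nullThrough !== undefined && blockNumber <= entry.nullThrough) return null;
    return undefined; // unknown: fall through to the network
  }
  put(address, blockNumber, code) {
    const entry = this.entries.get(address) || {};
    if (code != null && code !== '0x') {
      if (entry.codeFrom === undefined || blockNumber < entry.codeFrom) {
        entry.codeFrom = blockNumber;
        entry.code = code;
      }
    } else if (entry.nullThrough === undefined || blockNumber > entry.nullThrough) {
      // Rule 3: null responses never overwrite a cached code entry.
      entry.nullThrough = blockNumber;
    }
    this.entries.set(address, entry);
  }
}
```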

filter race condition

problem:
you can request filter results on the new block before they've finished fetching from the network.

need to:

  • add some kind of filter → readiness mapping (maybe WeakMaps?)
  • mark the filter unready when a new block arrives
  • process the update
  • mark the filter ready again
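The readiness gating described above could be sketched like this (names assumed; callers asking for changes mid-update are queued until the fetch completes):

```javascript
class GatedFilter {
  constructor() {
    this.ready = true;
    this.waiting = [];
    this.changes = [];
  }
  // New block arrives: unready the filter, fetch updates, then re-ready
  // and flush any readers that arrived mid-update.
  onNewBlock(fetchUpdates, done) {
    this.ready = false;
    fetchUpdates((err, updates) => {
      if (!err && updates) this.changes = this.changes.concat(updates);
      this.ready = true;
      this.waiting.splice(0).forEach((cb) => cb(null, this.changes));
      if (done) done(err || null);
    });
  }
  getChanges(cb) {
    if (this.ready) return cb(null, this.changes);
    this.waiting.push(cb); // block until the in-flight update finishes
  }
}
```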

batch subprovider

Re-batch requests just before hitting the network; this could reduce network load.

cache - forks

from the chats:

eth_getTransactionByHash should be fork type, not perma, now that I think of it, because if the block the tx was included in is "forked out", the tx will settle in another block but have the same tx hash.

It's quite an edge case, but the eth_getCode result could also change if the contract constructor included something like a blockNumber or coinbase opcode and there was a fork.

"pending" not being handled correctly

against provider engine:

curl -d '{"jsonrpc":"2.0","method":"eth_getTransactionCount","params": ["0x18a3462427bcc9133bb46e88bcbe39cd7ef0e761", "pending"], "id":1}' -X POST https://testrpc.metamask.io
{"id":1,"jsonrpc":"2.0","result":null}

against geth:

curl -d '{"jsonrpc":"2.0","method":"eth_getTransactionCount","params": ["0x18a3462427bcc9133bb46e88bcbe39cd7ef0e761", "pending"], "id":1}' -X POST https://rawtestrpc.metamask.io
{"id":1,"jsonrpc":"2.0","result":"0x100000"}

refactor - provider-subprovider parity

here are some code snippets from our discussion on how we would like to see this stuff work.

provider and subprovider api match via boilerplate in base class:
(not shown: error handling)

BaseClass.prototype.sendAsync = function(payload, cb){
  this.handleRequest(payload, function end(err, result){
    if (err) return cb(err)
    cb(null, { id: payload.id, jsonrpc: '2.0', result: result })
  }, function next(){
    cb(new Error('Method not supported'))
  })
}

ChildClass.prototype.handleRequest = function(payload, end, next){
  end(null, '0x1234')
}

should be able to do this, albeit for limited rpc methods:

new Web3(subprovider)

external blockTracker (solo):

new BlockTracker(anyProvider)

external blockTracker (engine stack):

var engine = new Engine()
var blockTracker = new BlockTracker(engine)

// important to have the blockTracker for reactive block events
engine.add(blockTracker)
engine.add(CacheProvider({blockTracker: blockTracker}))
engine.add(HttpProvider())

blockTracker.start()

Block Polling

  • factor out into provider
  • currently asking for latest block, should increment from last known block instead
  • add parameters for block polling interval
  • fork detection ( emit('fork', lastCommonBlock) )
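The "increment from last known block" item above can be sketched as a pure helper (function name assumed): instead of always requesting 'latest', the tracker walks forward one block at a time so no block events are skipped between polls.

```javascript
// Given the last block we processed and the chain's current latest block,
// return every block number we still need to fetch, in order.
function nextBlocksToFetch(lastKnownNumber, latestNumber) {
  const missing = [];
  for (let n = lastKnownNumber + 1; n <= latestNumber; n++) {
    missing.push(n);
  }
  return missing;
}
```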

ES6 Module Consumption

When attempting to use a modern bundler (in my case Rollup, http://rollupjs.org/), I cannot import the individual modules I need because they aren't exported correctly:

Module MY_PROJECT/node_modules/web3-provider-engine/subproviders/rpc.js does not export default (imported by MY_PROJECT/lib/index.js)

Error: Module MY_PROJECT/node_modules/web3-provider-engine/subproviders/rpc.js does not export default (imported by MY_PROJECT/lib/index.js)

at Module.trace (MY_PROJECT/node_modules/rollup/src/Module.js:683:30)
at MY_PROJECT/node_modules/rollup/src/Module.js:265:30
at Array.forEach (native)
at MY_PROJECT/node_modules/rollup/src/Module.js:263:25
at Array.forEach (native)
at Module.bindReferences (MY_PROJECT/node_modules/rollup/src/Module.js:256:19)
at MY_PROJECT/node_modules/rollup/src/Bundle.js:104:44
at Array.forEach (native)
at MY_PROJECT/node_modules/rollup/src/Bundle.js:104:18

Too many listeners

When sending a transaction, I get an error:

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.

It's coming from the Filter subprovider's onFilterChange _ready.on method. That _ready object either needs its max-listener limit raised, or it has a memory leak.

Cache subprovider should return a clone of the response

I'm running into errors in my application where something, somewhere, is changing hex values on request result objects to numbers. I don't think this is the cache provider, and likely not provider engine (though it could be) but it wouldn't be an issue if the cache provider returned a deep clone of the result instead of the result object itself.

The bug I'm seeing looks like this:

  1. Make request, cache miss, result saved.
  2. result sent out of the provider and into the app; app makes changes to result object.
  3. Make request, cache hit, result object returned to app (the same one as before)
  4. App gets the edited result object, expectations differ, blows up.

I understand that the app shouldn't edit the result object, but we at least have the power to prevent this by returning a clone each time.
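A sketch of the defensive-copy fix proposed here (class and function names are illustrative; a JSON round-trip clone is sufficient because RPC results are plain JSON values):

```javascript
// Deep-clone a JSON-serializable value; undefined passes through for misses.
function cloneResult(result) {
  return result === undefined ? undefined : JSON.parse(JSON.stringify(result));
}

// Cache that never hands out its internal objects: cloned on the way in
// (so callers can't mutate stored state later) and on the way out.
class CloningCache {
  constructor() {
    this.store = new Map();
  }
  put(key, result) {
    this.store.set(key, cloneResult(result));
  }
  get(key) {
    return cloneResult(this.store.get(key));
  }
}
```

With this, step 4 in the failure sequence above can't happen: the app's edits only touch its own copy.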

Simplify hooked wallet subprovider

Right now the hooked-wallet-subprovider requires four methods to be implemented:

  • sendTransaction
  • sign
  • approveTx
  • approveMsg

The first two are pure equivalents of web3.eth methods; the latter two are used to prompt the user for confirmation and call back with their responses.

This was confusing to a new user of provider-engine, and once I read it, it seemed strange to me, too.

Why not just provide the pure web3 methods, and let the consumer deal with approving or throwing an error? It seems like this would make the module easier to understand for new users, while making it more flexible, too.

Safari crashes during block polling

When I leave my provider polling in Safari, the tab crashes. I followed these instructions to get the names of JavaScript functions in my stack trace, and it looks like it's happening inside _handleAsync's end function. I don't understand what stack is being used for, so I can't figure out if provider-engine is doing something that might bother Safari or if I've just hit a weird corner case. My top theories right now are provider-engine and webpack's process.nextTick shim (implicated in the stack trace below).

I think the only useful help at this point would be an explanation of stack. After that, we can close this until someone else runs into this madness.

(lldb) btjs
* thread #1: tid = 0x1a9c, 0x00007fff97474e23, queue = 'com.apple.main-thread, stop reason = EXC_BAD_ACCESS (code=1, addrep
    frame #0: 0x00007fff97474e23 JavaScriptCore`JSC::buildGetByIDList(JSC::ExecState*, JSC::JSValue, JSC::Identifier const&, JSC::PropertySlot const&, JSC::StructureStubInfo&) + 243
    frame #1: 0x00007fff96faa727 JavaScriptCore`operationGetByIdBuildList + 1751
    frame #2: 0x0000314c17a4a79d nextTick#D4b61W [Baseline](Cell[JSLexicalEnvironment ID: 111]: 0x1069cf5d0, Cell[Function ID: 41]: 0x10952c5b0)
    frame #3: 0x0000314c17a4ba19 #ESGvTw [Baseline](Cell[JSLexicalEnvironment ID: 111]: 0x10a050d00, Cell[Function ID: 41]: 0x10952d4b0)
    frame #4: 0x0000314c17830b02 #En2Bht [Baseline](Cell[Object ID: 3910]: 0x10a0ae680, Cell[Function ID: 41]: 0x10952d4b0)
    frame #5: 0x0000314c17a262e9 #CQ5ovJ [Baseline](Cell[JSDOMWindowShell ID: 373]: 0x106843fd0)
    frame #6: 0x0000314c17401b1a #Dzkhgo [Baseline](Cell[JSDOMWindowShell ID: 373]: 0x106843fd0)
    frame #7: 0x0000314c17972186 #COKpbW [Baseline](Cell[JSLexicalEnvironment ID: 111]: 0x10952d840, Undefined, Cell[Function ID: 41]: 0x10952ccd0)
    frame #8: 0x0000314c17849b26 #AaqAED [DFG](Cell[JSLexicalEnvironment ID: 111]: 0x10f14bad0, Undefined, 0, Cell[Function ID: 41]: 0x10952ccd0)
    frame #9: 0x0000314c17849ba6 iterate#BzjxxE [DFG](Cell[JSLexicalEnvironment ID: 111]: 0x10f14bad0)
    frame #10: 0x0000314c179dc206 eachOfSeries#EdBBeZ [DFG](Cell[Object ID: 3910]: 0x10a0ae680, Cell[Array ID: 164]: 0x1069336b0, Cell[Function ID: 41]: 0x10952d510, Cell[Function ID: 41]: 0x10952d8a0)
    frame #11: 0x0000314c176fc94d eachSeries#AbA51N [DFG](Cell[Object ID: 3910]: 0x10a0ae680, Cell[Array ID: 164]: 0x1069336b0, Cell[Function ID: 41]: 0x10952d930, Cell[Function ID: 41]: 0x10952d8a0)
    frame #12: 0x0000314c179c0ce5 end#ALiYjz [DFG](Cell[JSLexicalEnvironment ID: 111]: 0x10b99f700, Null, Cell[Object ID: 2084]: 0x10b99f640)
    frame #13: 0x0000314c179332a6 #BNl1x5 [Baseline](Cell[JSLexicalEnvironment ID: 111]: 0x10b99f6c0, Null, Cell[Object ID: 2085]: 0x10b99f680)
    frame #14: 0x0000314c17910606 onreadystatechange#BSiYKw [Baseline](Cell[XMLHttpRequest ID: 482]: 0x106843820, Cell[Event ID: 3395]: 0x10dc5f8c0)
    frame #15: 0x00007fff973eaad9 JavaScriptCore`vmEntryToJavaScript + 326
    frame #16: 0x00007fff973187c9 JavaScriptCore`JSC::JITCode::execute(JSC::VM*, JSC::ProtoCallFrame*) + 169
    frame #17: 0x00007fff96ef63dd JavaScriptCore`JSC::Interpreter::executeCall(JSC::ExecState*, JSC::JSObject*, JSC::CallType, JSC::CallData const&, JSC::JSValue, JSC::ArgList const&) + 493
    frame #18: 0x00007fff97092f37 JavaScriptCore`JSC::call(JSC::ExecState*, JSC::JSValue, JSC::CallType, JSC::CallData const&, JSC::JSValue, JSC::ArgList const&, WTF::NakedPtr<JSC::Exception>&) + 71
    frame #19: 0x00007fff8d1dc78a WebCore`WebCore::JSEventListener::handleEvent(WebCore::ScriptExecutionContext*, WebCore::Event*) + 1002
    frame #20: 0x00007fff8d616d4b WebCore`WebCore::EventTarget::fireEventListeners(WebCore::Event*, WebCore::EventTargetData*, WTF::Vector<WebCore::RegisteredEventListener, 1ul, WTF::CrashOnOverflow, 16ul>&) + 635
    frame #21: 0x00007fff8d0e2ea0 WebCore`WebCore::EventTarget::fireEventListeners(WebCore::Event*) + 224
    frame #22: 0x00007fff8d1d9f9d WebCore`WebCore::EventTarget::dispatchEvent(WTF::PassRefPtr<WebCore::Event>) + 93
    frame #23: 0x00007fff8d1d9eda WebCore`WebCore::XMLHttpRequestProgressEventThrottle::dispatchEvent(WTF::PassRefPtr<WebCore::Event>) + 154
    frame #24: 0x00007fff8d1d9df8 WebCore`WebCore::XMLHttpRequestProgressEventThrottle::dispatchReadyStateChangeEvent(WTF::PassRefPtr<WebCore::Event>, WebCore::ProgressEventAction) + 56
    frame #25: 0x00007fff8d1d9be8 WebCore`WebCore::XMLHttpRequest::callReadyStateChangeListener() + 168
    frame #26: 0x00007fff8d1fbd94 WebCore`WebCore::XMLHttpRequest::didFinishLoading(unsigned long, double) + 340
    frame #27: 0x00007fff8d172bb9 WebCore`WebCore::CachedResource::checkNotify() + 153
    frame #28: 0x00007fff8d43a803 WebCore`WebCore::CachedRawResource::finishLoading(WebCore::SharedBuffer*) + 227
    frame #29: 0x00007fff8d172a51 WebCore`WebCore::SubresourceLoader::didFinishLoading(double) + 1153
    frame #30: 0x00007fff9409ea89 WebKit`WebKit::WebResourceLoader::didReceiveWebResourceLoaderMessage(IPC::Connection&, IPC::MessageDecoder&) + 561
    frame #31: 0x00007fff93edbf56 WebKit`IPC::Connection::dispatchMessage(std::__1::unique_ptr<IPC::MessageDecoder, std::__1::default_delete<IPC::MessageDecoder> >) + 102
    frame #32: 0x00007fff93ede482 WebKit`IPC::Connection::dispatchOneMessage() + 114
    frame #33: 0x00007fff974d6ae5 JavaScriptCore`WTF::RunLoop::performWork() + 437
    frame #34: 0x00007fff974d71c2 JavaScriptCore`WTF::RunLoop::performWork(void*) + 34
    frame #35: 0x00007fff92f09881 CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 17
    frame #36: 0x00007fff92ee8fbc CoreFoundation`__CFRunLoopDoSources0 + 556
    frame #37: 0x00007fff92ee84df CoreFoundation`__CFRunLoopRun + 927
    frame #38: 0x00007fff92ee7ed8 CoreFoundation`CFRunLoopRunSpecific + 296
    frame #39: 0x00007fff892a5935 HIToolbox`RunCurrentEventLoopInMode + 235
    frame #40: 0x00007fff892a576f HIToolbox`ReceiveNextEventCommon + 432
    frame #41: 0x00007fff892a55af HIToolbox`_BlockUntilNextEventMatchingListInModeWithFilter + 71
    frame #42: 0x00007fff8e782efa AppKit`_DPSNextEvent + 1067
    frame #43: 0x00007fff8e78232a AppKit`-[NSApplication _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 454
    frame #44: 0x00007fff8e776e84 AppKit`-[NSApplication run] + 682
    frame #45: 0x00007fff8e74046c AppKit`NSApplicationMain + 1176
    frame #46: 0x00007fff97f8645e libxpc.dylib`_xpc_objc_main + 793
    frame #47: 0x00007fff97f84e8a libxpc.dylib`xpc_main + 494
    frame #48: 0x000000010289cb4a com.apple.WebKit.WebContent`___lldb_unnamed_function1$$com.apple.WebKit.WebContent + 16
    frame #49: 0x00007fff9c08e5ad libdyld.dylib`start + 1
    frame #50: 0x00007fff9c08e5ad libdyld.dylib`start + 1

this.provider.sendAsync is not a function

I keep getting this error, and it seems to be emitted from the start() function:

bundle.js:54318 Uncaught TypeError: this.provider.sendAsync is not a function
at Web3Subprovider.handleRequest (bundle.js:54318)
at next (bundle.js:53398)
at Web3ProviderEngine._handleAsync (bundle.js:53385)
at Web3ProviderEngine._fetchBlock (bundle.js:53494)
at Web3ProviderEngine._fetchLatestBlock (bundle.js:53470)
at Web3ProviderEngine._startPolling (bundle.js:53447)
at Web3ProviderEngine.start (bundle.js:53341)
at window.onload (bundle.js:61042)
Web3Subprovider.handleRequest @ bundle.js:54318
next @ bundle.js:53398
Web3ProviderEngine._handleAsync @ bundle.js:53385
Web3ProviderEngine._fetchBlock @ bundle.js:53494
Web3ProviderEngine._fetchLatestBlock @ bundle.js:53470
Web3ProviderEngine._startPolling @ bundle.js:53447
Web3ProviderEngine.start @ bundle.js:53341
window.onload @ bundle.js:61042

Cache Improvement

  • breakout into separate file
  • handle in-flight caching ( subsequent requests while actively handling request )
  • add cache "roll-off" with option for number of blocks before rolloff
  • handle fork-level caching ( requires fork-detection #4 )

Caching - handle in-flight caching

Multiple requests for the same thing should result in only one network request, even if the first request hasn't completed its network I/O yet.
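That deduplication can be sketched like this (names assumed; callers asking for a key while a fetch is pending piggyback on the one in-flight network request):

```javascript
class InflightCache {
  constructor(fetcher) {
    this.fetcher = fetcher;   // (key, cb) -> performs the actual network i/o
    this.pending = new Map(); // key -> array of waiting callbacks
  }
  request(key, cb) {
    const waiters = this.pending.get(key);
    if (waiters) {
      waiters.push(cb); // fetch already in flight: just wait for it
      return;
    }
    this.pending.set(key, [cb]);
    this.fetcher(key, (err, result) => {
      const cbs = this.pending.get(key) || [];
      this.pending.delete(key);
      cbs.forEach((fn) => fn(err, result)); // fan out one result to all callers
    });
  }
}
```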

ethereumjs-vm constructor problem with 'null' parameter

Hi. I'm getting the error below on a build that uses testrpc and ran fine a day ago:

Error: TypeError: Cannot read property 'state' of null
    at new VM (/usr/local/lib/node_modules/ethereumjs-testrpc/node_modules/web3-provider-engine/node_modules/ethereumjs-vm/lib/index.js:31:15)
    at VmSubprovider.runVm (/usr/local/lib/node_modules/ethereumjs-testrpc/node_modules/web3-provider-engine/subproviders/vm.js:84:22)
    at VmSubprovider.handleRequest (/usr/local/lib/node_modules/ethereumjs-testrpc/node_modules/web3-provider-engine/subproviders/vm.js:47:8)
    at next (/usr/local/lib/node_modules/ethereumjs-testrpc/node_modules/web3-provider-engine/index.js:95:18)
    at /usr/local/lib/node_modules/ethereumjs-testrpc/lib/subproviders/gethdefaults.js:31:7
    at Web3ProviderEngine._inspectResponseForNewBlock (/usr/local/lib/node_modules/ethereumjs-testrpc/node_modules/web3-provider-engine/index.js:231:12)
    at /usr/local/lib/node_modules/ethereumjs-testrpc/node_modules/web3-provider-engine/index.js:131:14
    at /usr/local/lib/node_modules/ethereumjs-testrpc/node_modules/web3-provider-engine/node_modules/async/dist/async.js:356:16
    at replenish (/usr/local/lib/node_modules/ethereumjs-testrpc/node_modules/web3-provider-engine/node_modules/async/dist/async.js:877:25)
    at iterateeCallback (/usr/local/lib/node_modules/ethereumjs-testrpc/node_modules/web3-provider-engine/node_modules/async/dist/async.js:867:17)

I think this is because of changes to ethereumjs-vm's constructor which now looks like this:

function VM (opts = {}) {
  this.stateManager = new StateManager({
    trie: opts.state,
    blockchain: opts.blockchain
  })
  // …
}

causing this call in subproviders/vm.js to fail:

// create vm with state lookup intercepted
var vm = self.vm = new VM(null, null, {
  enableHomestead: true
})
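Note that the `opts = {}` default only applies when the argument is undefined, not null, so the old positional call still reaches `opts.state` on a null value. A self-contained reproduction with a stubbed constructor (field names assumed) and the corresponding fix:

```javascript
// Stub with the same parameter shape as the new ethereumjs-vm constructor
// (names assumed). The `opts = {}` default only kicks in for undefined --
// passing null still dereferences `opts.state`.
function VM (opts = {}) {
  this.state = opts.state
  this.blockchain = opts.blockchain
}

// Old vm.js call site: throws "Cannot read property 'state' of null".
let threw = false
try {
  new VM(null, null, { enableHomestead: true })
} catch (e) {
  threw = true
}

// Fix: pass a single options object, never positional nulls.
const vm = new VM({ enableHomestead: true })
```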

Unspecified block incorrect result

$ curl -d '{"jsonrpc":"2.0","method":"eth_getBalance","params": ["0x92172d94d7e1c196177afee2e61c85164f81b762"], "id":1}' -X POST https://testrpc.metamask.io
{"id":1,"jsonrpc":"2.0","result":"0x0"}

$ curl -d '{"jsonrpc":"2.0","method":"eth_getBalance","params": ["0x92172d94d7e1c196177afee2e61c85164f81b762"], "id":1}' -X POST https://rawtestrpc.metamask.io
{"id":1,"jsonrpc":"2.0","result":"0x013f306a2409fc0000"}

The request omits the block parameter; the endpoint behind provider-engine returns 0x0, while the raw endpoint returns the correct balance.
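One plausible fix (an assumption, not a confirmed diagnosis): normalize an omitted block parameter to "latest" before the request reaches the cache, so both endpoints resolve the same block. A sketch, with parameter positions assumed for illustration:

```javascript
// Sketch: default a missing block parameter to 'latest' for methods whose
// final parameter is a block reference (positions assumed, not exhaustive).
const blockParamIndex = {
  eth_getBalance: 1,
  eth_getCode: 1,
  eth_getTransactionCount: 1,
  eth_getStorageAt: 2
}

function normalizeBlockParam (payload) {
  const index = blockParamIndex[payload.method]
  if (index === undefined) return payload
  const params = payload.params.slice()
  if (params[index] === undefined) params[index] = 'latest'
  return Object.assign({}, payload, { params })
}
```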

nonce tracker - nonces too far ahead

Somehow nickdodson managed to get his nonces too high -- a few higher than his current txCount.

So we get the new nonce when a tx is signed, but we don't check the result of that signed tx to make sure the node did not respond with an error. I think that's how it got out of sync.
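The proposed fix can be sketched as follows (all names illustrative, not the engine's actual nonce-tracker API): only keep the incremented nonce once the node accepts the signed tx, and roll it back on error.

```javascript
// Sketch: optimistically reserve a nonce, then roll it back if the node
// rejects the signed tx, so the local count cannot drift ahead of the
// node's txCount. (Simplified: assumes submissions are serialized.)
function createNonceTracker (sendRawTransaction) {
  let localNonce = 0
  return {
    get nonce () { return localNonce },
    submit (rawTx, callback) {
      const nonce = localNonce
      localNonce += 1 // optimistically reserve this nonce
      sendRawTransaction(rawTx, (err, txHash) => {
        if (err) {
          // Node responded with an error: release the nonce.
          localNonce = nonce
          return callback(err)
        }
        callback(null, txHash)
      })
    }
  }
}
```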

'Validator' provider

A validator is a (sub)provider that forwards every query to two (or more) (sub)providers, compares the results, and returns an error if they don't match; otherwise it returns the shared result.

That is the basic concept, but there are many edge cases to consider. For one: what happens if a legitimate transaction was processed and the two endpoints don't have the same processing speed?

Even sendRawTransaction should be possible to be sent to multiple endpoints given the identical nonce will ensure it is only executed once.
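The basic concept can be sketched like this (names illustrative; the edge cases above -- endpoints at different heights, sendRawTransaction semantics -- are deliberately ignored):

```javascript
// Sketch of a validator: fan a query out to every backing provider,
// compare the results, and fail on any mismatch.
function createValidator (providers) {
  return {
    sendAsync (payload, callback) {
      let remaining = providers.length
      let called = false
      const results = []
      function done (err, res) {
        if (called) return
        called = true
        callback(err, err ? undefined : res)
      }
      providers.forEach((provider, i) => {
        provider.sendAsync(payload, (err, res) => {
          if (err) return done(err)
          results[i] = JSON.stringify(res.result)
          if (--remaining === 0) {
            const agree = results.every((r) => r === results[0])
            done(agree ? null : new Error('validator: results disagree'), res)
          }
        })
      })
    }
  }
}
```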

Reduce bundle size

My bundled file grew by ~5 MB once I added the provider engine as a dependency. That's unminified, but even minified it's roughly 4 MB. That's way too big for a browser app -- for mobile, anyway.

I think there are some dependencies we can remove. Here are some quick suggestions:

  • async - a nicety, but we don't need the whole library for the little we use it.
  • ethereumjs-utils - it's used for minor hex stripping and conversion to buffers. We can get by without it in most cases.

Will look for more.
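As an illustration of the first suggestion, the small slice of `async` that a project typically uses (e.g. series iteration) can be replaced by a few lines. This is a sketch of a stand-in, not the async library's implementation:

```javascript
// Tiny stand-in for async.eachSeries: run an async iteratee over items one
// at a time, stopping on the first error -- enough to cover simple series
// control flow without pulling in the whole async library.
function eachSeries (items, iteratee, done) {
  let i = 0
  function next (err) {
    if (err || i === items.length) return done(err || null)
    iteratee(items[i++], next)
  }
  next()
}
```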
