
hardhat-deploy

A Hardhat Plugin For Replicable Deployments And Easy Testing

A complete dev template using hardhat-deploy is available here: https://github.com/wighawag/template-ethereum-contracts It also contains various branches exemplifying the capability of hardhat-deploy. Check it out.

What is it for?

This hardhat plugin adds a mechanism to deploy contracts to any network, keeping track of them and replicating the same environment for testing.

It also adds a mechanism to associate names to addresses, so test and deployment scripts can be reconfigured by simply changing the address a name points to, allowing different configurations per network. This also results in much clearer tests and deployment scripts (no more accounts[0] in your code).

This plugin contains a lot more features too, all geared toward a better developer experience :

  • chain configuration export
  • listing deployed contracts' addresses and their abis (useful for web apps)
  • library linking at the time of deployment.
  • deterministic deployment across networks.
  • support for specific deploy script per network (L1 vs L2 for example)
  • ability to access deployment from "companion" networks
  • deployment dependency system (allowing you to only deploy what is needed).
  • deployment retrying (by saving pending tx): so you can feel confident when making a deployment that you can always recover.
  • deployments as test fixture using evm_snapshot to speed up testing.
  • ability to create your own test fixtures that automatically benefit from the same evm_snapshot speed-up
  • combined with hardhat-deploy-ethers, it has the ability to get an ethers contract instance by name (like await ethers.getContract("ContractName")).
  • importing artifacts from external sources (like npm packages), including truffle support.
  • importing deployments from external sources (like npm packages)
  • ability to log information in deploy mode only (while in test the console remains clean).
  • contains helpers to read from and execute transactions on deployed contracts, referring to them by name.
  • these helpers contain options to auto-mine on dev networks (to speed up test deployments).
  • saves the metadata of deployed contracts so they can always be fully verified, via sourcify or etherscan.
  • ability to submit contract sources to etherscan and sourcify for verification at any time (because hardhat-deploy saves all the necessary info, this can be executed whenever you want).
  • supports hardhat's fork feature so deployments can be accessed even when running through a fork.
  • named accounts are automatically impersonated too, so you can perform transactions as if you had their private keys.
  • proxy deployment with the ability to upgrade them transparently, only when the code changes.
  • this includes support for openzeppelin transparent proxies
  • diamond deployment with facets, allowing you to focus on what the new version will be. It will generate the diamondCut necessary to reach the new state.
  • watch and deploy: hardhat-deploy can watch both your deploy script and contract code and redeploy on changes.
  • HCR (Hot Contract Replacement): the watch feature combined with proxy or diamond, gives you an experience akin to frontend Hot Module Replacement: once your contract changes, the deployment is executed and your contract retains the same address and same state, allowing you to tweak your contracts while debugging your front-end.

hardhat-deploy in a nutshell

Before going into the details, here is a very simple summary of the basic features of hardhat-deploy.

hardhat-deploy allows you to write deploy scripts in the deploy folder. Each file that looks like the following will be executed in turn when you run the task: hardhat --network <networkName> deploy

// deploy/00_deploy_my_contract.js
module.exports = async ({getNamedAccounts, deployments}) => {
  const {deploy} = deployments;
  const {deployer} = await getNamedAccounts();
  await deploy('MyContract', {
    from: deployer,
    args: ['Hello'],
    log: true,
  });
};
module.exports.tags = ['MyContract'];

Furthermore you can also ensure these scripts are executed in test too by calling await deployments.fixture(['MyContract']) in your test. This is optimized, so if multiple tests use the same contract, the deployment will be executed once and each test will start with the exact same state.
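
For illustration, here is a minimal sketch of such a test, reusing the MyContract script and tag from above (the test body is only a demonstration, not the plugin's required pattern):

import {deployments} from 'hardhat';

describe('MyContract', function () {
  it('is deployed by the fixture', async function () {
    await deployments.fixture(['MyContract']); // runs the tagged deploy scripts once, then reuses an evm_snapshot
    const myContract = await deployments.get('MyContract');
    console.log(myContract.address);
  });
});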

This is a huge benefit for testing since you are not required to replicate the deployment procedure in your tests. The tag feature (as seen in the script above) and dependencies will also make your life easier when writing complex deployment procedures.

You can even group deploy scripts in different sub-folders and ensure they are executed in their logical order.

Furthermore, hardhat-deploy also supports multi-chain setups, like L1 and L2, with multiple deploy folders specific to each network.

All of this can also be bundled in an npm package, so users of hardhat-deploy can reuse your deployment procedure and get started integrating with your project locally.

There is a tutorial covering the basics here: https://github.com/wighawag/tutorial-hardhat-deploy

Installation

npm install -D hardhat-deploy

And add the following statement to your hardhat.config.js:

require('hardhat-deploy');

If you use ethers.js, we recommend you also install hardhat-deploy-ethers, which adds extra features to access deployments as ethers contracts.

npm install --save-dev @nomiclabs/hardhat-ethers hardhat-deploy-ethers ethers

More details on hardhat-deploy-ethers repo: https://github.com/wighawag/hardhat-deploy-ethers#readme

TypeScript support

With hardhat the tsconfig.json is optional.

But if you add folders to the include field in tsconfig.json, you'll also need to include hardhat.config.ts, like so:

"include": ["./hardhat.config.ts", "./scripts", "./deploy", "./test"]

For deploy scripts (see below), you can write them this way to benefit from typing:

import {HardhatRuntimeEnvironment} from 'hardhat/types';
import {DeployFunction} from 'hardhat-deploy/types';

const func: DeployFunction = async function (hre: HardhatRuntimeEnvironment) {
  // code here
};
export default func;

See a template that uses hardhat-deploy here: https://github.com/wighawag/template-ethereum-contracts

This repo also has example branches that exemplify specific features, like fork testing here: https://github.com/wighawag/template-ethereum-contracts/tree/examples/fork-test

Migrating existing deployment to hardhat-deploy

Only needed for an existing project that already deployed contracts and has the deployment information available (at minimum, address and abi)

You might want to switch your current deployment process to use hardhat-deploy. In that case you probably have some deployments saved elsewhere.

In order to port them to hardhat-deploy, you'll need to create one .json file per contract in the deployments/<network> folder (configurable via paths config).

The network folder is simply the hardhat network name (as configured in hardhat.config.js, accessible at runtime via hre.network.name). Such a folder needs to contain a file named .chainId with the chainId in decimal.

For example, for a network named "rinkeby" (pointing to the network of the same name), the file deployments/rinkeby/.chainId would contain:

4

Note: prior to hardhat-deploy 0.6, the chainId was appended to the folder name (except for some known network names). This has changed, and upgrading to 0.6 requires you to rename the folder and add the '.chainId' file.

Each contract file must follow this type (as defined in types.ts) :

export interface Deployment {
  address: Address;
  abi: ABI;
  receipt?: Receipt;
  transactionHash?: string;
  history?: Deployment[];
  numDeployments?: number;
  implementation?: string;
  args?: any[];
  linkedData?: any;
  solcInputHash?: string;
  metadata?: string;
  bytecode?: string;
  deployedBytecode?: string;
  libraries?: Libraries;
  userdoc?: any;
  devdoc?: any;
  methodIdentifiers?: any;
  diamondCut?: FacetCut[];
  facets?: Facet[];
  storageLayout?: any;
  gasEstimates?: any;
}

As you can see, only abi and address are mandatory. But having the other fields allows more features. For example, metadata and args allow you to benefit from contract code verification.
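
For instance, a minimal hand-written deployments/rinkeby/Greeter.json could look like the sketch below; the address and the single abi entry are placeholders, and a real file would contain the actual deployed address and full abi:

{
  "address": "0x1234567890123456789012345678901234567890",
  "abi": [
    {
      "inputs": [],
      "name": "greet",
      "outputs": [{"internalType": "string", "name": "", "type": "string"}],
      "stateMutability": "view",
      "type": "function"
    }
  ]
}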

For Receipt, the following type is expected:

export type Receipt = {
  from: string;
  transactionHash: string;
  blockHash: string;
  blockNumber: number;
  transactionIndex: number;
  cumulativeGasUsed: string;
  gasUsed: string;
  contractAddress?: string;
  to?: string;
  logs?: Log[];
  events?: any[];
  logsBloom?: string;
  byzantium?: boolean;
  status?: number;
  confirmations?: number;
};

Here is an example:

Let's say you have:

  • 2 contracts named Greeter and Registry deployed on rinkeby
  • 1 contract named Greeter deployed on mainnet
  • 2 contracts named Greeter and Registry deployed on a network named rinkeby2

You would get the following folder structure:

deployments/
  mainnet/
    .chainId
    Greeter.json
  rinkeby/
    .chainId
    Greeter.json
    Registry.json
  rinkeby2/
    .chainId
    Greeter.json
    Registry.json

The reason why hardhat-deploy saves the chainId in the .chainId file is twofold:

  • safety: if you were to change the network name to point to a different chain, it would not attempt to read the wrong folder and assume that a contract has been deployed while it has not.
  • the ability to know the chainId without needing to be connected to a node (and so without depending on hardhat.config.js settings). Useful for the export task.

Hardhat Tasks Available/Updated

hardhat-deploy adds several tasks to hardhat. It also modifies existing ones, adding new options and new behavior. All of these are described here:


1. hardhat deploy


This plugin adds the deploy task to Hardhat.

This task will execute the scripts in the deploy folder and save the contract deployments to disk. These deployments are supposed to be saved for example in a git repository. This way they can be accessed later. But you are free to save them elsewhere and get them back via your mechanism of choice.

With the deployment saved, it allows you to deploy a contract only if changes were made.

Deploy scripts (also called Deploy functions) can also perform arbitrary logic.

For further details on how to use it and write deploy script, see section below.

Options

--export <filepath>: export one file that contains all contracts (address, abi + extra data) for the network being invoked. The file contains the minimal information so as not to bloat your front end.

--export-all <filepath>: export one file that contains all contracts across all saved deployments, regardless of the network being invoked.

--tags <tags>: only execute deploy scripts with the given tags (separated by commas) and their dependencies (see more info here about tags and dependencies)

--gasprice <gasprice>: specify the gasprice (in wei) to use by default for transactions executed via hardhat-deploy helpers in deploy scripts

--write <boolean>: default to true (except for hardhat network). If true, write deployments to disk (in deployments path, see path config).

Flags

--reset: This flag resets the deployments from scratch. Previously deployed contracts are deleted from disk and not considered.

--silent: This flag removes hardhat-deploy log output (see log function and log options for hre.deployments)

--watch: This flag makes the task never-ending, watching for file changes in the deploy scripts folder and the contract source folder. If any changes happen, the contracts are recompiled and the deploy scripts are re-run. Combined with a proxy deployment (Proxies or Diamond) this allows you to have HCR (Hot Contract Replacement).
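
For example, a hedged invocation combining several of these options (the tag, file path and gas price are placeholder values) might look like:

hardhat --network rinkeby deploy --tags Token --export ./frontend/contracts.json --gasprice 2000000000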


2. hardhat node


This plugin modifies the node task so that it also executes the deploy scripts before exposing the server's HTTP RPC interface.

It adds options similar to those of the deploy task:

Options

--export <filepath>: export one file that contains all contracts (address, abi + extra data) for the network being invoked. The file contains the minimal information so as not to bloat your front end. If the extension ends in .ts it will generate a typescript file containing the contracts' info.

--export-all <filepath>: export one file that contains all contracts across all saved deployments, regardless of the network being invoked. If the extension ends in .ts it will generate a typescript file containing the contracts' info.

--tags <tags>: only executes deploy scripts with the given tags (separated by commas) and their dependencies (see more info here about tags and dependencies)

--gasprice <gasprice>: specify the gasprice to use by default for transactions executed via hardhat-deploy helpers in deploy scripts

--write <boolean>: default to true (except for hardhat network). If true, write deployments to disk (in deployments path, see path config).

Flags

--no-reset: This flag prevents the existing deployments from being deleted. Keeping them is usually not desired when running the node task, as the network is created from scratch and previous deployments are irrelevant.

--silent: This flag removes hardhat-deploy log output (see log function and log options for hre.deployments)

--watch: This flag makes the task never-ending, watching for file changes in the deploy scripts folder and the contract source folder. If any changes happen, the contracts are recompiled and the deploy scripts are re-run. Combined with a proxy deployment (Proxies or Diamond) this allows you to have HCR (Hot Contract Replacement).

--no-deploy: This flag discards all other options and reverts to the normal hardhat node behavior, without any deployment being performed.

⚠️ Note that the deployments are saved as if the network name is localhost. This is because hardhat node is expected to be used as localhost: you can for example execute hardhat --network localhost console after the node is running. Doing hardhat --network hardhat console would not do anything useful. The node still takes its configuration from the hardhat network settings in hardhat.config.js, though.
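
For example, a hedged invocation (the export path is a placeholder) could be:

hardhat node --watch --export ./frontend/contracts.json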


3. hardhat test


This plugin adds a flag argument --deploy-fixture to the test task. If enabled, it will run the global deployments fixture before the tests and snapshot it. This generally speeds up the tests, as further tests will be able to revert back to the full deployment.

⚠️ Note though that if your tests behave differently depending on whether that option is on or not, it most likely means that your deploy scripts' tags and dependencies are not configured correctly. This is because the global fixture ensures all contracts are deployed, while tests will usually (for efficiency) ask only for particular tags.


4. hardhat etherscan-verify


This plugin adds the etherscan-verify task to Hardhat.

This task will submit the contract source and other info of all deployed contracts to allow etherscan to verify and record the sources.

Instead of using the full solc input, this task will first attempt to send the minimal sources from the metadata. But Etherscan sometimes fails due to a bug in the solidity compiler (ethereum/solidity#9573). As such, this task can fall back on the full solc input (see option --solc-input). Note that if your contract was deployed with a previous version of hardhat-deploy, it might not contain the full information. The issue seems to be fully resolved since solc version 0.8.

This task will also attempt to automatically find the SPDX license in the source.

To execute that task, you need to specify the network to run against :

hardhat --network mainnet etherscan-verify [--api-key <etherscan-apikey>] [--api-url <url>]

Options

Note that hardhat-deploy now uses a different config format so as not to conflict with hardhat-etherscan.

--api-key <api key>: lets you specify your etherscan api key. Alternatively, you can provide it via the env variable ETHERSCAN_API_KEY or through the hardhat.config.ts verify field:

{
  ...
  verify: {
    etherscan: {
      apiKey: '<API key>'
    }
  }
}

Keep in mind that the ETHERSCAN_API_KEY env variable is read first, before the hardhat.config.ts value.
If you want to set up multi-network api key support, you can do it by adding an env loader that uses per-network .env files and sets ETHERSCAN_API_KEY for each network that way.
Alternatively, you can change the mainnet etherscan api key env var name to something other than ETHERSCAN_API_KEY, and specify the other networks' keys as specified above.

--api-url <url>: lets you specify the etherscan API URL to submit the source to. It can also be configured per network in hardhat.config.js:

{
  ...
  networks: {
    mynetwork: {
      ...
      verify: {
        etherscan: {
          apiUrl: 'https://api-testnet.ftmscan.com',
          apiKey: process.env.ETHERSCAN_API_KEY_FANTOM
        }
      }
    }
  }
}

NOTE: some projects use an apiUrl like https://api-testnet.ftmscan.com/api, but the /api path should be removed here; just use https://api-testnet.ftmscan.com

--license <SPDX license id>: SPDX license (useful if the SPDX identifier is not listed in the sources); it needs to be supported by etherscan: https://etherscan.io/contract-license-types

--force-license: if set, forces the use of the license specified by the --license option, ignoring the one in the source (useful for licenses not supported by etherscan)

--solc-input: fall back on the full solc input if needed (useful when etherscan fails on the minimal sources, see ethereum/solidity#9573)

--sleep: sleep 500ms between each verification so the API rate limit is not exceeded


5. hardhat sourcify


This plugin adds the sourcify task to Hardhat.

Similar to hardhat etherscan-verify, this task will submit the contract sources and other info of all deployed contracts to sourcify.

hardhat --network mainnet sourcify

Later this task might instead pin the metadata to ipfs, so sourcify can automatically verify them.

Options

--contract-name <contract name>: specify the name of the contract you want to verify

--endpoint <endpoint>: specify the sourcify endpoint, defaults to https://sourcify.dev/server/

--write-failing-metadata: if set and the sourcify task fails to verify, the metadata file is written to disk so you can more easily figure out what went wrong.
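
For example (the contract name is a placeholder):

hardhat --network mainnet sourcify --contract-name Greeter --write-failing-metadata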


6. hardhat export


This plugin adds the export task to Hardhat.

This task will export the deployed contracts (saved in the deployments folder) to a file with a simple format containing only contract addresses and abi, useful for web apps.

One of the following options needs to be set for this task to have any effect:

Options

--export <filepath>: export one file that contains all contracts (address, abi + extra data) for the network being invoked. The file contains the minimal information so as not to bloat your front end. If the extension ends in .ts it will generate a typescript file containing the contracts' info.

--export-all <filepath>: export one file that contains all contracts across all saved deployments, regardless of the network being invoked. If the extension ends in .ts it will generate a typescript file containing the contracts' info.

This last option has some limitations when combined with the use of external deployments (see Configuration). If such external deployments were created with an older version of hardhat-deploy or with truffle, the chainId might be missing. In order for these to be exported, the hardhat network config needs to explicitly state the chainId in the networks config of hardhat.config.js.

With both --export and --export-all, using the special <filepath> value of - will output to STDOUT rather than writing a normal file.
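
For example, assuming placeholder output paths:

hardhat --network rinkeby export --export ./frontend/rinkeby.json
hardhat --network rinkeby export --export-all ./frontend/contracts.json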



Hardhat Environment Extensions

This plugin extends the Hardhat Runtime Environment by adding 4 fields:

  • getNamedAccounts: () => Promise<{ [name: string]: string }>: a function returning an object whose keys are names and values are addresses. It is parsed from the namedAccounts configuration (see Configuration).

  • getUnnamedAccounts: () => Promise<string[]>: returns the accounts that have no names; useful for tests where you want to be sure the account is not one of the predefined ones.

  • deployments: contains functions to access past deployments or to save new ones, as well as helper functions.

  • getChainId(): Promise<string>: offers an easy way to fetch the current chainId.
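
As a quick illustration, here is a hedged sketch of a standalone script using these fields (run via hardhat run; the "deployer" named account is an assumption from your own namedAccounts config):

import hre from 'hardhat';

async function main() {
  const {deployer} = await hre.getNamedAccounts(); // resolved from the namedAccounts config
  const chainId = await hre.getChainId();
  const all = await hre.deployments.all(); // every saved deployment for the current network
  console.log({deployer, chainId, contracts: Object.keys(all)});
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});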



Configuration


1. namedAccounts (ability to name addresses)


This plugin extends the HardhatConfig's object with an optional namedAccounts field.

namedAccounts allows you to associate names with addresses and have them configured per chain. This lets you use meaningful names in your tests while the addresses map to multi-sigs on live networks, for example.

{
    namedAccounts: {
        deployer: {
            default: 0, // here this will by default take the first account as deployer
            1: 0, // similarly on mainnet it will take the first account as deployer. Note though that depending on how the hardhat networks are configured, account 0 on one network can be different from account 0 on another
            4: '0xA296a3d5F026953e17F472B497eC29a5631FB51B', // but for rinkeby it will be a specific address
            "goerli": '0x84b9514E013710b9dD0811c9Fe46b837a4A0d8E0', // it can also specify a specific network name (as configured in hardhat.config.js)
        },
        feeCollector:{
            default: 1, // here this will by default take the second account as feeCollector (so in the test this will be a different account than the deployer)
            1: '0xa5610E1f289DbDe94F3428A9df22E8B518f65751', // on the mainnet the feeCollector could be a multi sig
            4: '0xa250ac77360d4e837a13628bC828a2aDf7BabfB3', // on rinkeby it could be another account
        }
    }
}

2. extra hardhat.config networks' options


hardhat-deploy adds 5 new fields to the networks configuration:

live

this is not used internally, but it is useful to perform actions on a network depending on whether it is a live network (rinkeby, mainnet, etc.) or a temporary one (localhost, hardhat). The default is true (except for localhost and hardhat, where the default is false).

saveDeployments

this tells whether hardhat-deploy should save the deployments to disk or not. Defaults to true, except for the hardhat network.

tags

networks can have tags to characterize them. The config is an array, and at runtime hre.network.tags is an object whose fields (the tags) are set to true.

This is useful to conditionally operate on networks based on their use case.

Example:

{
  networks: {
    localhost: {
      live: false,
      saveDeployments: true,
      tags: ["local"]
    },
    hardhat: {
      live: false,
      saveDeployments: true,
      tags: ["test", "local"]
    },
    rinkeby: {
      live: true,
      saveDeployments: true,
      tags: ["staging"]
    }
  }
}
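
At runtime, deploy scripts can then branch on hre.network.tags. Here is a hedged sketch that only runs a faucet-style script on networks tagged "local" (the contract name is a placeholder):

import {DeployFunction} from 'hardhat-deploy/types';

const func: DeployFunction = async function (hre) {
  const {deployer} = await hre.getNamedAccounts();
  await hre.deployments.deploy('TestFaucet', {from: deployer, log: true});
};
func.skip = async (hre) => !hre.network.tags.local; // skip on any network not tagged "local"
func.tags = ['TestFaucet'];
export default func;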

deploy

the deploy field overrides the paths.deploy option and lets you define a set of folders containing the deploy scripts to be executed for this network.

You can thus have one network executing L1 deployments and others executing L2 deployments, etc.

You could also have a folder that deploys contracts that are live on mainnet but that you need to replicate for your test or local network.

Example:

{
  networks: {
    mainnet: {
      deploy: [ 'deploy/' ]
    },
    rinkeby: {
      deploy: [ 'testnet-deploy/' ]
    }
  }
}

companionNetworks

the companionNetworks field is an object whose keys are any names you desire and whose values are the names of networks that will be accessible inside the deploy scripts. For example:

{
  ...
  networks: {
    optimism: {
      url: 'http://127.0.0.1:8545',
      ovm: true,
      companionNetworks: {
        l1: 'localhost',
      },
    }
  }
  ...
}

By using names, you can have the same deploy script used across different sets of networks.

For your tests you could have the companion networks pointing to the same hardhat network; for test deployments, you could have rinkeby acting as your L2 while goerli acts as your L1.

An example repo that showcases a multi-network setup with optimism can be found here: https://github.com/wighawag/template-ethereum-contracts/tree/examples/optimism

Deploy scripts can then access the companion network and its deployments as follows:

module.exports = async ({
  getNamedAccounts,
  deployments,
  getChainId,
  getUnnamedAccounts,
}) => {
  const {deploy,execute} = deployments;
  const {deployer} = await getNamedAccounts();

  const OVM_L1ERC20Gateway = await hre.companionNetworks['l1'].deployments.get(
    'OVM_L1ERC20Gateway'
  ); // layer 1

  await execute(
    'SimpleERC20_OVM',
    {from: deployer, log: true},
    'init',
    OVM_L1ERC20Gateway.address
  );
};

3. extra hardhat.config paths' options


hardhat-deploy also adds fields to HardhatConfig's ProjectPaths object.

Here is an example showing the default values :

{
    paths: {
        deploy: 'deploy',
        deployments: 'deployments',
        imports: 'imports'
    }
}

The deploy folder is expected to contain the deploy scripts that are executed upon invocation of hardhat deploy or hardhat node. It can also be an array of folder paths.

The deployments folder will contain the resulting deployments (contract addresses along their abi, bytecode, metadata...). One folder per network and one file per contract.

The imports folder is expected to contain artifacts that were pre-compiled. Useful if you want to upgrade to a new solidity version but want to keep using previously compiled contracts. The artifacts use the same format as normal hardhat artifacts, so you can easily copy them over before switching to a new compiler version.

This is less useful now that hardhat supports multiple solidity compilers at once.



4. deterministicDeployment (ability to specify a deployment factory)


This plugin extends the HardhatConfig's object with an optional deterministicDeployment field.

deterministicDeployment allows you to associate, per network, the information used for deterministic deployment. The information for each deterministic deployment consists of a factory, a deployer, the required funding and a signedTx to deploy the factory. The default deterministic deployment used is the Deterministic Deployment Proxy. The factory expects a 32-byte salt concatenated with the deployment data (see EIP-1014 for more information on these parameters).

Using deterministicDeployment it is possible to define a different setup for the deterministic deployment. One use case for this is deterministic deployment on networks that require replay protection (such as Celo or Avalanche). The Deterministic Deployment Proxy can only be deployed on networks that don't enforce replay protection, therefore on other networks an alternative library has to be used. An example of this is the Safe Singleton Factory, an adjusted version of the Deterministic Deployment Proxy that contains signed transactions which include replay protection.

The information can be defined either as an object

{
    deterministicDeployment: {
      "4": {
        factory: "<factory_address>",
        deployer: "<deployer_address>",
        funding: "<required_funding_in_wei>",
        signedTx: "<raw_signed_tx>",
      }
    }
}

or as a function that returns the information for the deterministic deployment

{
  deterministicDeployment: (network: string) => {
    return {
      factory: '<factory_address>',
      deployer: '<deployer_address>',
      funding: '<required_funding_in_wei>',
      signedTx: '<raw_signed_tx>',
    };
  };
}

Importing deployment from other projects (with truffle support)

hardhat-deploy also adds the external field to HardhatConfig.

This field allows you to specify paths for external artifacts or deployments.

The external object has 2 fields:

{
    external: {
        contracts: [
          {
            artifacts: "node_modules/@cartesi/arbitration/export/artifacts",
            deploy: "node_modules/@cartesi/arbitration/export/deploy"
          },
          {
            artifacts: "node_modules/someotherpackage/artifacts",
          }
        ],
        deployments: {
          rinkeby: ["node_modules/@cartesi/arbitration/build/contracts"],
        },
    }
}

The contracts field specifies an array of objects, each of which has 2 fields:

  • artifacts: (mandatory) a path to an artifact folder. This supports both hardhat and truffle artifacts.
  • deploy: (optional) a path to a folder containing deploy scripts. These deploy scripts only have access to the artifacts specified in the artifacts field. This allows projects to share their deployment procedure; a boon for developers aiming to integrate with them, as they can get the contracts deployed for local testing.

The deployments field specifies an object whose keys are hardhat network names and whose values are arrays of paths in which to look for deployments. It supports both hardhat-deploy and truffle formats.


Access to Artifacts (non-deployed contract code and abi)

Artifacts in hardhat terminology represent a compiled contract (not yet deployed) with at least its bytecode and abi.

hardhat-deploy gives access to these artifacts via the deployments.getArtifact function:

const {deployments} = require('hardhat');
const artifact = await deployments.getArtifact(artifactName);

With the hardhat-deploy-ethers plugin you can get an artifact as an ethers contract factory, ready to be deployed, via the following:

const {deployments, ethers} = require('hardhat');
const factory = await ethers.getContractFactory(artifactName);

Note that the artifact files need to be either in the artifacts folder that hardhat generates on compilation, or in the imports folder where you can store contracts compiled elsewhere. They can also be present in the folder specified in external.artifacts; see Importing deployment from other projects.



How to Deploy Contracts


The deploy Task

hardhat --network <networkName> deploy [options and flags]

This is a new task that hardhat-deploy adds. As the name suggests, it deploys contracts. To be exact, it will look for files in the deploy folder, or whatever was configured in paths.deploy (see paths config).

It will scan for files in alphabetical order and execute them in turn.

  • it will require each of these files and execute the exported function with the HRE as argument

To specify the network, you can use the builtin hardhat argument --network <network name> or set the env variable HARDHAT_NETWORK

⚠️ Note that running hardhat deploy without specifying a network will use the default network. If the default network is hardhat (the default's default), then nothing will be persisted, as everything happens in memory; but this can be used to check that a deployment runs without issues.


Deploy Scripts

The deploy scripts need to be of the following type :

export interface DeployFunction {
  (env: HardhatRuntimeEnvironment): Promise<void | boolean>;
  skip?: (env: HardhatRuntimeEnvironment) => Promise<boolean>;
  tags?: string[];
  dependencies?: string[];
  runAtTheEnd?: boolean;
  id?: string;
}

The skip function can be used to skip executing the script under whatever condition you choose. It simply needs to return a promise resolving to true.

The tags field is a list of strings; when the deploy task is executed with one of these tags, the script will be executed (unless it skips). In other words, if the deploy task is executed with a tag that does not belong to that script, that script will not be executed unless it is a dependency of a script that does get executed.

The dependencies field is a list of tags that will be executed before that script. So if the script is executed, every script whose tags match any of the dependencies will be executed first.

The runAtTheEnd field is a boolean which, if set to true, queues that script to be executed after all other scripts.

This set of fields allows more flexibility in organizing the scripts. You are not limited to alphabetical order, and you can even organize deploy scripts in sub-folders.

Finally, the function can return true if it wishes to never be executed again. This can be useful to emulate migration scripts that are meant to be executed only once. When such a script returns true (async), the id field is used to track execution; if that field is not present when the script returns true, it will fail.

In other words, if you want a particular deploy script to run only once, it needs to both return true (async) and have an id set.
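
For instance, here is a hedged sketch of a run-once, migration-style script (the contract name Registry and its migrate method are placeholders):

import {DeployFunction} from 'hardhat-deploy/types';

const func: DeployFunction = async function (hre) {
  const {execute} = hre.deployments;
  const {deployer} = await hre.getNamedAccounts();
  // perform a one-time call against an already-deployed contract
  await execute('Registry', {from: deployer, log: true}, 'migrate');
  return true; // signal that this script should never run again
};
func.id = 'registry_migration_v1'; // required when returning true, so execution can be tracked
func.tags = ['RegistryMigration'];
export default func;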

In any case, as general advice, every deploy function should be idempotent, so that it can always recover from a failure or a pending transaction. This is what underpins most of hardhat-deploy's philosophy.

This is why the hre.deployments.deploy function will by default only deploy if the contract code has changed, making it easier to write idempotent scripts.

An example of a deploy script :

module.exports = async ({
  getNamedAccounts,
  deployments,
  getChainId,
  getUnnamedAccounts,
}) => {
  const {deploy} = deployments;
  const {deployer} = await getNamedAccounts();

  // the following will only deploy "GenericMetaTxProcessor" if the contract was never deployed or if the code changed since last deployment
  await deploy('GenericMetaTxProcessor', {
    from: deployer,
    gasLimit: 4000000,
    args: [],
  });
};

As you can see the HRE passed in has 4 new fields :

  • getNamedAccounts is a function that returns a promise to an object whose keys are names and values are addresses. It is parsed from the namedAccounts configuration (see namedAccounts).

  • getUnnamedAccounts is a function that returns a promise to an array of accounts which were not named (see namedAccounts). It is useful for tests where you want to be sure that the account has no specific role in the system (no tokens given, no admin access, etc.).

  • getChainId is a function which returns a promise for the chainId, as a convenience.

  • deployments is an object which contains functions to access past deployments or to save new ones, as well as helper functions.

That latter field contains, for example, the deploy function that allows you to deploy contracts and save them. It contains a lot more functions though:


The deployments field

The deployments field contains several helper functions to deploy contracts but also to execute transactions:

export interface DeploymentsExtension {
  deploy(name: string, options: DeployOptions): Promise<DeployResult>; // deploy a contract
  diamond: {
    // deploy diamond based contract (see section below)
    deploy(name: string, options: DiamondOptions): Promise<DeployResult>;
  };
  deterministic( // return the deterministic address as well as a function to deploy the contract, can pass the `salt` field in the option to use different salt
    name: string,
    options: Create2DeployOptions
  ): Promise<{
    address: Address;
    implementationAddress?: Address;
    deploy(): Promise<DeployResult>;
  }>;
  fetchIfDifferent( // return true if new compiled code is different than deployed contract
    name: string,
    options: DeployOptions
  ): Promise<{differences: boolean; address?: string}>;
  save(name: string, deployment: DeploymentSubmission): Promise<void>; // low level save of deployment
  get(name: string): Promise<Deployment>; // fetch a deployment by name, throw if not existing
  getOrNull(name: string): Promise<Deployment | null>; // fetch deployment by name, return null if not existing
  getDeploymentsFromAddress(address: string): Promise<Deployment[]>;
  all(): Promise<{[name: string]: Deployment}>; // return all deployments
  getArtifact(name: string): Promise<Artifact>; // return a hardhat artifact (compiled contract without deployment)
  getExtendedArtifact(name: string): Promise<ExtendedArtifact>; // return a extended artifact (with more info) (compiled contract without deployment)
  run( // execute deployment scripts
    tags?: string | string[],
    options?: {
      resetMemory?: boolean;
      deletePreviousDeployments?: boolean;
      writeDeploymentsToFiles?: boolean;
      export?: string;
      exportAll?: string;
    }
  ): Promise<{[name: string]: Deployment}>;
  fixture( // execute deployment as fixture for test // use evm_snapshot to revert back
    tags?: string | string[],
    options?: {fallbackToGlobal?: boolean; keepExistingDeployments?: boolean}
  ): Promise<{[name: string]: Deployment}>;
  createFixture<T, O>( // execute a function as fixture using evm_snapshot to revert back each time
    func: FixtureFunc<T, O>,
    id?: string
  ): (options?: O) => Promise<T>;
  log(...args: any[]): void; // log data only if logging is enabled (disabled in test fixture)

  execute( // execute function call on contract
    name: string,
    options: TxOptions,
    methodName: string,
    ...args: any[]
  ): Promise<Receipt>;
  rawTx(tx: SimpleTx): Promise<Receipt>; // execute a simple transaction
  catchUnknownSigner( // you can wrap other function with this function and it will catch failure due to missing signer with the details of the tx to be executed
    action: Promise<any> | (() => Promise<any>),
    options?: {log?: boolean}
  ): Promise<null | {
    from: string;
    to?: string;
    value?: string;
    data?: string;
  }>;
  read( // make a read-only call to a contract
    name: string,
    options: CallOptions,
    methodName: string,
    ...args: any[]
  ): Promise<any>;
  read(name: string, methodName: string, ...args: any[]): Promise<any>;
  // rawCall(to: Address, data: string): Promise<any>; // TODO ?
}
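
As an illustration, here is a hedged sketch using the read, execute and log helpers inside a deploy script (the contract name Greeter and its greet/setGreeting methods are placeholders):

import {DeployFunction} from 'hardhat-deploy/types';

const func: DeployFunction = async function ({deployments, getNamedAccounts}) {
  const {read, execute, log} = deployments;
  const {deployer} = await getNamedAccounts();
  // read-only call, then a transaction only if the on-chain value needs updating
  const current = await read('Greeter', 'greet');
  if (current !== 'Hello') {
    await execute('Greeter', {from: deployer, log: true}, 'setGreeting', 'Hello');
  }
  log('greeting is now Hello');
};
func.tags = ['GreeterSetup'];
export default func;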

deployments.deploy(<name>, options)


The deploy function, as mentioned above, allows you to deploy a contract and save it under a specific name.

The deploy function expects 2 parameters: one for the name and one for the options.

See below the full list of fields that the option parameter allows and requires:

export type DeployOptions = {
  from: string; // address (or private key) that will perform the transaction. You can use `getNamedAccounts` to retrieve the address you want by name.
  contract?: // this is an optional field. If not specified, it defaults to the contract with the same name as the first parameter
    | string // this field can be either a string for the name of the contract
    | { // or abi and bytecode
        abi: ABI;
        bytecode: string;
        deployedBytecode?: string;
      };
  args?: any[]; // the list of arguments for the constructor (or the upgrade function in case of proxy)
  skipIfAlreadyDeployed?: boolean; // if set to true, it will not attempt to deploy even if the contract deployed under the same name is different
  log?: boolean; // if true, it will log the result of the deployment (tx hash, address and gas used)
  linkedData?: any; // This allows you to associate any JSON data with the deployment. Useful for merkle tree data for example
  libraries?: { [libraryName: string]: Address }; // This lets you associate libraries with the deployed contract
  proxy?: boolean | string | ProxyOptions; // This option allows you to consider your contract as a proxy (see below for more details)

  // here are some common tx options:
  gasLimit?: string | number | BigNumber;
  gasPrice?: string | BigNumber;
  value?: string | BigNumber;
  nonce?: string | number | BigNumber;

  estimatedGasLimit?: string | number | BigNumber; // to speed up the estimation, it is possible to provide an upper gasLimit
  estimateGasExtra?: string | number | BigNumber; // this option allows you to add a gas buffer on top of the estimation

  autoMine?: boolean; // this forces an evm_mine to be executed. This is useful to speed up deployments on test networks that allow you to specify a block delay (ganache for example). This option basically skips the delay by force-mining.
  deterministicDeployment?: boolean | string; // if true, it will deploy the contract at a deterministic address based on bytecode and constructor arguments. The address will be the same across all networks. It uses the create2 opcode for that; if it is a string, the string will be used as the salt.
  waitConfirmations?: number; // number of confirmations to wait for after the transaction is included in the chain
};
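
For illustration, here is a hedged sketch combining a few of these options (the contract name Registry and the chosen values are placeholders):

import {DeployFunction} from 'hardhat-deploy/types';

const func: DeployFunction = async function ({deployments, getNamedAccounts}) {
  const {deploy} = deployments;
  const {deployer} = await getNamedAccounts();
  await deploy('Registry', {
    from: deployer,
    args: [],
    log: true,
    deterministicDeployment: true, // create2-based deploy, same address on every network
    skipIfAlreadyDeployed: true, // do not redeploy even if the code changed
    waitConfirmations: 2, // wait for 2 confirmations before considering the deployment done
  });
};
func.tags = ['Registry'];
export default func;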


Handling contract using libraries

In the deploy function, one of the DeployOptions fields is the libraries field. It allows you to link external contracts as libraries at the time of deployment.

First, you have to deploy the library using the deploy function; then, when we deploy a contract that needs the linked library, we pass the deployed library's name and address in the libraries object.

First step: deploy the library:

const exampleLibrary = await deploy("ExampleLibrary", {
    from: <deployer>
});

ExampleLibrary is now deployed to whatever network was chosen (hardhat deploy --network <networkName>)

For example, if we are deploying on Rinkeby, this library will get deployed on rinkeby, and the exampleLibrary variable will be a deployment object that contains the abi as well as the deployed address for the contract.

Now that the library is deployed, we can link it in our next deployed contract.

const example = await deploy("Example", {
    from: <deployer>,
    args: ["example string argument for the 'Example' contract constructor"],
    libraries: {
        ExampleLibrary: exampleLibrary.address
    }
});

This libraries object takes the name of the library, and its deployed address on the network. Multiple libraries can be passed into the libraries object.



Exporting Deployments

Apart from deployments saved in the deployments folder which contains all information available about the contract (compile time data + deployment data), hardhat-deploy allows you to export lightweight files.

These can be used, for example, to power your frontend with contracts' addresses and abis.

This comes in 2 flavors.

The first one is exported via the --export <file> option and follows this format:

export interface Export {
  chainId: string;
  name: string;
  contracts: {[name: string]: ContractExport};
}

where name is the name of the network configuration chosen (see hardhat option --network)

The second one is exported via the --export-all <file> option and follows this format:

export type MultiExport = {
  [chainId: string]: Export[];
};

As you can see, the second format includes the first. While in most cases you'll need the single export, where your application supports only one network, there are cases where your app needs to support multiple networks at once. This second format allows for that.

Furthermore, as hardhat supports multiple network configurations for the same network (rinkeby, mainnet...), the export-all format will contain each of them, grouped by their chainId.

Note: from v0.10.4 the old multi-export format shown below is no more:

export type MultiExport = {
  [chainId: string]: {[name: string]: Export};
};

For both --export and --export-all, if the extension ends in .ts it will generate a typescript file containing the contracts info.



Deploying and Upgrading Proxies

As mentioned above, the deploy function can also deploy a contract through a proxy. It can be done without modification of the contract as long as its number of constructor arguments matches the proxy initialization/update function. If the arguments do not match, see this section below.

The default Proxy is both ERC-1967 and ERC-173 compliant, but other proxies can be specified, like openzeppelin transparent proxies.

Code for the default Proxy can be found here.

To perform such proxy deployment, you just need to invoke the deploy function with the following options : {..., proxy: true}

See example :

module.exports = async ({getNamedAccounts, deployments, getChainId}) => {
  const {deploy} = deployments;
  const {deployer} = await getNamedAccounts();
  await deploy('Greeter', {
    from: deployer,
    proxy: true,
  });
};

You can also set it to proxy: "<upgradeMethodName>", in which case the function <upgradeMethodName> will be executed upon upgrade. The args field will then be used for that function instead of the constructor. It is also possible to then have a constructor with the same arguments and have the proxy be disabled. This can be useful if you want your contract to be upgradeable on a test network but non-upgradeable on mainnet.

See example :

module.exports = async ({
  getNamedAccounts,
  deployments,
  getChainId,
  network,
}) => {
  const {deploy} = deployments;
  const {deployer} = await getNamedAccounts();
  await deploy('Greeter', {
    from: deployer,
    proxy: network.live ? false : 'postUpgrade',
    args: ['arg1', 2, 3],
  });
};

The proxy option can also be an object which can set the specific owner that the proxy is going to be managed by.

See example:

module.exports = async ({getNamedAccounts, deployments, getChainId}) => {
  const {deploy} = deployments;
  const {deployer, greeterOwner} = await getNamedAccounts();
  await deploy('Greeter', {
    from: deployer,
    proxy: {
      owner: greeterOwner,
      methodName: 'postUpgrade',
    },
    args: ['arg1', 2, 3],
  });
};

Note that for the second invocation, this deployment will not be executed from the specified from: deployer, as such a transaction would otherwise always fail. It will instead be automatically executed from the proxy's current owner (in that case: greeterOwner).

Now, it is likely you do not want to locally handle the private key / mnemonic of the account that manages the proxy, or the greeterOwner in question could even be a multi-sig. As such, that second invocation will throw an error as it cannot find a local signer for it.

The error will output the necessary information to upgrade the contract, but hardhat-deploy also comes with a utility function for such cases: deployments.catchUnknownSigner, which will catch the error and output to the console the necessary information while continuing to the next step.

Here is the full example :

module.exports = async ({getNamedAccounts, deployments, getChainId}) => {
  const {deploy, catchUnknownSigner} = deployments;
  const {deployer, greeterOwner} = await getNamedAccounts();
  await catchUnknownSigner(
    deploy('Greeter', {
      from: deployer,
      proxy: {
        owner: greeterOwner,
        methodName: 'postUpgrade',
      },
      args: ['arg1', 2, 3],
    })
  );
  // you could pause the deployment here and wait for input to continue
};

When the constructor and init functions are different

When the constructor and the proxy's init function have different signatures, you will not be able to use the top-level args property. Instead you can use the execute property of proxy to specify the init method and arguments. This will not try to pass any arguments to the constructor.

const deployed = await deploy("YourContract", {
  from: deployer,
  proxy: {
    execute: {
      init: {
        methodName: "initialize",
        args: ["arg1", "arg2"],
      },
    },
    proxyContract: "OpenZeppelinTransparentProxy",
  },
  log: true,
  autoMine: true,
});

Proxy deployment options

The full proxy options are as follows:

type ProxyOptionsBase = {
  owner?: Address; // this sets the owner of the proxy. Further upgrades will need to be executed from that owner
  upgradeIndex?: number; // allows you to break down your upgrades into separate deploy scripts, each with their own index. A deploy call with a specific upgradeIndex will be executed only once, and only if the current upgradeIndex is one less.
  proxyContract?: // defaults to "EIP173Proxy". See below for more details
  string | ArtifactData;
  viaAdminContract?: // allows you to specify a contract that acts as a middleman to perform upgrades. Useful and recommended for Transparent Proxies
  | string
    | {
        name: string;
        artifact?: string | ArtifactData;
      };
};

export type ProxyOptions =
  | (ProxyOptionsBase & {
      methodName?: string; // method to be executed when the proxy is deployed for the first time or when the implementation is modified. Use the deployOptions args field for arguments
    })
  | (ProxyOptionsBase & {
      execute?:
        | {
            methodName: string; // method to be executed when the proxy is deployed for the first time or when the implementation is modified.
            args: any[];
          }
        | {
            init: {
              methodName: string; // method to be executed when the proxy is deployed
              args: any[];
            };
            onUpgrade?: {
              methodName: string; // method to be executed when the proxy is upgraded (not first deployment)
              args: any[];
            };
          };
    });

The proxyContract field allows you to specify your own proxy contract. If it is a string, it will first attempt to get an artifact with that name. If not found, it will fall back on one of the following if the name matches:

  • EIP173Proxy: use the default Proxy that is EIP-173 compliant

  • EIP173ProxyWithReceive: Same as above except that the proxy contains a receive hook to accept empty ETH payment.

  • OpenZeppelinTransparentProxy: Uses the Openzeppelin Transparent Proxy (copied from the openzeppelin repo, see code here). When this option is chosen, the DefaultProxyAdmin is also used as admin, since a Transparent Proxy needs an intermediary contract for administration. This can be configured via the viaAdminContract option. Note that the DefaultProxyAdmin is slightly different from the one used by openzeppelin, as it allows you to set an owner different from msg.sender on first deploy, something the openzeppelin version does not allow, see: OpenZeppelin/openzeppelin-contracts#2639

  • OptimizedTransparentProxy: This contract is similar to the above, except that it is optimized to not require a storage read for the admin on every call.

Built-In Support For Diamonds (EIP2535)

The deployments field also exposes the diamond field: hre.deployments.diamond, which lets you deploy Diamonds in an easy way.

deployment / upgrade

Instead of specifying the facets to cut out or cut in, which the diamond contract expects, you specify the facets you want to end up having on the deployed contract.

This declarative approach allows you to focus on what you want instead of how to do it.

diamond.deploy expects the facets as names. The names represent the contracts to be deployed as facets. In a future version you'll be able to specify deployed contracts or artifact objects as facets.

To deploy a contract with 3 facets you can do as follows:

module.exports = async ({getNamedAccounts, deployments, getChainId}) => {
  const {diamond} = deployments;
  const {deployer, diamondAdmin} = await getNamedAccounts();
  await diamond.deploy('ADiamondContract', {
    from: deployer,
    owner: diamondAdmin,
    facets: ['Facet1', 'Facet2', 'Facet3'],
  });
};

if you then later execute the following script:

module.exports = async ({getNamedAccounts, deployments, getChainId}) => {
  const {diamond} = deployments;
  const {deployer, diamondAdmin} = await getNamedAccounts();
  await diamond.deploy('ADiamondContract', {
    from: diamondAdmin, // this need to be the diamondAdmin for upgrade
    owner: diamondAdmin,
    facets: ['NewFacet', 'Facet2', 'Facet3'],
  });
};

Then the NewFacet will be deployed automatically if needed and then the diamondCut will cut Facet1 out and add NewFacet.

Note that if the code for Facet2 and Facet3 changes, they will also be redeployed automatically and the diamondCuts will replace the existing facets with these new ones.

Note that the diamond has 3 facets added by default. These facets are used for ownership, diamondCut and diamond loupe.

The implementation is a slightly modified version of the reference implementation by Nick Mudge. The only difference is the custom constructor that allows multiple initializations, used to allow the default ERC165 facet to be initialized alongside your custom initialization function.

onUpgrade calls

Like normal proxies you can also execute a function at the time of an upgrade.

This is done by specifying the execute field in the diamond deploy options :

diamond.deploy('ADiamondContract', {
  from: deployer,
  owner: diamondAdmin,
  facets: ['NewFacet', 'Facet2', 'Facet3'],
  execute: {
    methodName: 'postUpgrade',
    args: ['one', 2, '0x3'],
  },
});

use the diamond contract in your scripts

Set externalArtifacts for typechain in hardhat.config.ts; this makes sure typechain knows this abi and generates typechain files for the diamond contract:

const config: HardhatUserConfig = {
  typechain: {
    externalArtifacts: ['deployments/localhost/ADiamondContract.json'],
  },
  ...
};

in your scripts

const ADiamondContract = await ethers.getContract<ADiamondContract>('ADiamondContract')
// do your stuff

more...

There are more options, to be described later...

Testing Deployed Contracts

You can continue using the usual test task:

hardhat test

Tests can use the hre.deployments.fixture function to run the deployment and snapshot it so that tests don't need to perform all the deployment transactions every time. They can simply reuse the snapshot for every test (this leverages evm_snapshot and evm_revert provided by both hardhat and ganache). You can for example set them in a beforeEach.

Here is an example of a test :

const {deployments} = require('hardhat');

describe('Token', () => {
  it('testing 1 2 3', async function () {
    await deployments.fixture(['Token']);
    const Token = await deployments.get('Token'); // Token is available because the fixture was executed
    console.log(Token.address);
    const ERC721BidSale = await deployments.get('ERC721BidSale');
    console.log({ERC721BidSale});
  });
});

Tests can also leverage named accounts for clearer tests. Combined with the hardhat-deploy-ethers plugin, you can write succinct tests:

const {ethers, getNamedAccounts} = require('hardhat');

describe('Token', () => {
  it('testing 1 2 3', async function () {
    await deployments.fixture(['Token']);
    const {tokenOwner} = await getNamedAccounts();
    const TokenContract = await ethers.getContract('Token', tokenOwner);
    await TokenContract.mint(2).then((tx) => tx.wait());
  });
});

Creating Fixtures

Furthermore, tests can easily create efficient fixtures using deployments.createFixture.

See example :

const setupTest = deployments.createFixture(
  async ({deployments, getNamedAccounts, ethers}, options) => {
    await deployments.fixture(); // ensure you start from fresh deployments
    const {tokenOwner} = await getNamedAccounts();
    const TokenContract = await ethers.getContract('Token', tokenOwner);
    await TokenContract.mint(10).then((tx) => tx.wait()); //this mint is executed once and then `createFixture` will ensure it is snapshotted
    return {
      tokenOwner: {
        address: tokenOwner,
        TokenContract,
      },
    };
  }
);
describe('Token', () => {
  it('testing 1 2 3', async function () {
    const {tokenOwner} = await setupTest();
    await tokenOwner.TokenContract.mint(2);
  });
});

While this example is trivial, some fixtures can require several transactions, and the ability to snapshot them automatically speeds up the tests greatly.



More Information On Hardhat Tasks


1. node task

as mentioned above, the node task is slightly modified and augmented with various flags and options

hardhat node

In particular it adds an argument --export that allows you to specify a destination file where the info about the contracts deployed is written. Your webapp can then access all contracts information.


2. test task

hardhat test

the test task is augmented with one flag argument, --deploy-fixture, that allows you to run all deployments in a fixture snapshot before executing the tests. This can speed up tests that use specific tags, as the global fixture takes precedence (unless specified otherwise).

In other words, tests can use deployments.fixture(<specific tag>) where the specific tag only deploys the minimal contracts for the tests, while still benefiting from the global deployment snapshot if it is used.

If a test needs the deployments to only include the specific deployment specified by the tag, it can use the following :

deployments.fixture('<specific tag>', {fallbackToGlobal: false});

Due to how snapshot/revert works in hardhat, this means that these tests will not benefit from the global fixture snapshot and will have to deploy their contracts as part of the fixture call. This is automatic but means that these tests will run slower.


3. run task

hardhat --network <networkName> run <script>

The run task acts as before, but thanks to the hre.deployments field your scripts can access deployed contracts:

const hre = require('hardhat');
const {deployments, getNamedAccounts} = hre;

(async () => {
  console.log(await deployments.all());
  console.log({namedAccounts: await getNamedAccounts()});
})();

You can also run it directly from the command line as usual.

HARDHAT_NETWORK=rinkeby node <script> is the equivalent except it does not load the hardhat environment twice (which the run task does)


4. console task

hardhat console

The same applies to the console task.
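For instance, a hypothetical session inspecting existing deployments might look like this (assuming the environment's fields, including deployments, are available in the console's global scope):

$ hardhat --network localhost console
> Object.keys(await deployments.all())   // names of every saved deployment
> (await deployments.get('Token')).address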


Deploy Scripts: Tags And Dependencies


It is possible to execute only specific parts of the deployments with hardhat deploy --tags <tags>.

<tags> is a comma-separated list of tags; for example, hardhat deploy --tags tag1,tag2 will run the scripts containing any of the tags tag1 or tag2.

To execute only the scripts containing all of the tags, add the --tags-require-all flag; for example, hardhat deploy --tags tag1,tag2 --tags-require-all will run only the scripts containing both tag1 and tag2.

Tags represent what the deploy script acts on. In general it will be a single string value: the name of the contract it deploys or modifies.

Another deploy script can then declare such a tag as a dependency; when that script is requested (through one of its own tags), the scripts registering the dependency tag are executed first.

Here is an example of two deploy scripts:

// first deploy script
module.exports = async ({getNamedAccounts, deployments}) => {
  const {deploy, log} = deployments;
  const namedAccounts = await getNamedAccounts();
  const {deployer} = namedAccounts;
  const deployResult = await deploy('Token', {
    from: deployer,
    args: ['hello', 100],
  });
  if (deployResult.newlyDeployed) {
    log(
      `contract Token deployed at ${deployResult.address} using ${deployResult.receipt.gasUsed} gas`
    );
  }
};
module.exports.tags = ['Token'];

// second deploy script
module.exports = async function ({getNamedAccounts, deployments}) {
  const {deploy, log} = deployments;
  const namedAccounts = await getNamedAccounts();
  const {deployer} = namedAccounts;
  const Token = await deployments.get('Token');
  const deployResult = await deploy('Sale', {
    from: deployer,
    contract: 'ERC721BidSale',
    args: [Token.address, 1, 3600],
  });
  if (deployResult.newlyDeployed) {
    log(
      `contract Sale deployed at ${deployResult.address} using ${deployResult.receipt.gasUsed} gas`
    );
  }
};
module.exports.tags = ['Sale'];
module.exports.dependencies = ['Token']; // this ensures the Token script above is executed first, so `deployments.get('Token')` succeeds

As you can see, the second one depends on the first: the second script declares as a dependency a tag that the first script registers.

With that, when hardhat deploy --tags Sale is executed, both scripts will be run, ensuring Sale is ready.

You can also make a script run after all the other scripts have run by setting runAtTheEnd to true. For example:

module.exports = async function ({getNamedAccounts, deployments}) {
  const {execute} = deployments;
  const namedAccounts = await getNamedAccounts();
  const {deployer, admin} = namedAccounts;
  await execute('Sale', {from: deployer}, 'setAdmin', admin);
};
module.exports.tags = ['Sale'];
module.exports.runAtTheEnd = true;

Tags can also be used in tests with deployments.fixture. This allows you to test against a subset of the deploy scripts.
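For example, reusing the tags defined above, a minimal test sketch could be:

const {deployments} = require('hardhat');

describe('Sale', () => {
  it('is deployed together with its Token dependency', async function () {
    await deployments.fixture(['Sale']); // runs the Token script first (dependency), then the Sale script
    const Sale = await deployments.get('Sale');
    const Token = await deployments.get('Token');
    console.log({Sale: Sale.address, Token: Token.address});
  });
});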

hardhat-deploy's People

Contributors

adeojoemmanuel, aefhm, aureliusbtc, benignmop, cmdallas, cruzdanilo, cryptotriv, dependabot[bot], filiplaurentiu, fvictorio, guotie, huyhuynh3103, ifelsedeveloper, itinance, kiriyaga, lbeder, mastyf, mikemcdonald, nataouze, novaknole, nxqbao, pcowgill, rmeissner, simone1999, stupid-boar, tomafrench, vsevo1od, wanyvic, wighawag, zmalatrax


hardhat-deploy's Issues

Connect build artifacts from the external project

I have an issue with connecting the external artifacts.
In Project A I have deploy and artifacts folders where I keep all the hardhat deployment files and artifacts generated after the first deployment.
In Project B I'm re-using them by putting the following into the hardhat config:

  external: {
    contracts: [
      {
        artifacts: '/contracts/artifacts',
        deploy: '/contracts/deploy',
      },
    ],
  },

Now I run npx hardhat deploy from Project B and see that all the contracts got successfully deployed.
But when I try to interact with any of the deployed contracts, I see Contract call: <UnrecognizedContract> in the console.
The same happens if I run the hardhat node first and deploy the contracts separately afterwards: for every deployment I see <UnrecognizedContract> in the logs.

What is the correct way to connect the external hardhat project, so the hardhat node could understand what contracts were deployed?

hardhat node task bug deploys to ropsten defaultNetwork

For ref see: NomicFoundation/hardhat#1139 (comment)

EDIT: copying over the bits relevant to hardhat-deploy:

Now the more serious bugs:

  • defaultNetwork: "ropsten"
    • npx hardhat node


I am using hardhat-deploy, which overrides the hardhat node task to run deploy scripts on start. I accidentally had my defaultNetwork set to ropsten, and somehow the hardhat node task did not fail; it deployed contracts to ropsten.

I would have expected hardhat node to fail with a Hardhat error similar to Error HH605: Unsupported network for JSON-RPC server.

This works when --network ropsten is passed as a flag, but apparently not if the network is configured as defaultNetwork.

Either way the main issue:

When a defaultNetwork is configured, hardhat-deploy seems to deploy all contracts to that network when the hardhat node command is invoked in the terminal. Even worse, it bypassed all my deploy scripts' skip functions for that defaultNetwork.

Upgrading Proxy Contracts Doesn't Work with Multiple Owners (potential fix included)

Hi, I'm trying to use the library to upgrade my contracts using proxies. Specifically, I want to have the proxy owned by a different admin than the implementation. This is pretty much required, as OZ states in the article that you link to in the docs. You mention in the docs that if we do this, "Note that for the second invocation, this deployment will fail to upgrade the proxy as the from which is deployer is not the same as the proxy's owner: greeterOwner."

Indeed, if I try to do this, it fails. But I don't think it has to fail. In digging around the source code, I determined that I believe a two-line change could fix this. I might be going about this in a silly way, but here's what I found:

here

-    const currentOwner = await read(proxyName, "proxyAdmin");
+   const currentOwner = await read(proxyName, {from: owner}, "proxyAdmin");

and here


-                const executeReceipt = await execute(proxyName, Object.assign({}, options), "changeImplementation", implementation.address, data);
+                const executeReceipt = await execute(proxyName, Object.assign({}, options, {from: owner}), "changeImplementation", implementation.address, data);

These changes make my tests go from red to green. Without them, I get the error Error: Transaction reverted: function selector was not recognized and there's no fallback function. It seems to stem from the fact that you're trying to call the proxy contract from a different address than its owner, which I believe means the proxy will never respond to functions that live on the proxy itself (such as proxyAdmin or changeImplementation).

The owner in the above file is already being pulled from getProxyOwner, so it seems reasonable to use it whenever calling functions on the Proxy contract itself. You do have the guard check that the "currentOwner" equals the "owner", but this check feels unnecessary during deployment. If someone wants to transfer ownership, I would expect that to happen as a separate migration, outside of a deploy call.

What are your thoughts on this fix? What other issues might come up? I think if we had it, proxy deployments with different owners for the proxy and the implementation would start to "just work" when you upgrade.

Thanks for your consideration! I like the library so far!

Feature Request: batch deploy transactions

It would be great to have a flag like --batch and have hardhat-deploy evaluate the total number of deployments to be done and the sequence and then send all transactions off to the network in one go with their respective sequential nonces.

Etherscan verify chokes for artifacts that don't have metadata (were compiled elsewhere)

I've switched over our deployment system to use buidler-deploy and so far it's AWESOME. I love this tool.

However, I'm having trouble using the etherscan-verify command because several of my artifacts don't have any metadata: they were deployed using the contracts option and therefore have incomplete artifacts.

It would be really helpful to be able to pass the list of deployment names we want verified. I figure this line needs to change. Or, perhaps it can just silently ignore deployments that don't have any metadata.

Thoughts?

hardhat-deploy Error: function "facetAddress" will shadow "facetAddress". Please update code to avoid

Someone is getting this error when trying to use a diamond with hardhat:
hardhat-deploy Error: function "facetAddress" will shadow "facetAddress". Please update code to avoid

This error is happening because in DiamondLoupeFacet.sol the facetAddress function name gets shadowed with a variable.

This is fixed in the most recent versions of diamond implementations. Can the diamond implementation of hardhat-deploy be updated to the newest version?

https://github.com/mudgen/diamond-3

namedAccounts is undefined

require('dotenv').config();

usePlugin("@nomiclabs/buidler-waffle");
usePlugin("@nomiclabs/buidler-ethers");
usePlugin("@nomiclabs/buidler-etherscan");
usePlugin("buidler-deploy");
usePlugin("solidity-coverage");

const ETHERSCAN_API_KEY = process.env.ETHERSCAN || "";

module.exports = {
  namedAccounts: {
    deployer: {
      default: 0
    },
  },
  defaultNetwork: "buidlerevm",
  solc: {
    version: "0.6.6",
    optimizer: {
      runs: 200,
      enabled: true,
    }
  },
  networks: {
    buidlerevm: {
      gasPrice: 0,
      blockGasLimit: 100000000,
    },
    coverage: {
      url: 'http://127.0.0.1:8555' // Coverage launches its own ganache-cli client
    }
  },
  etherscan: {
    url: "https://api.etherscan.io/api",
    apiKey: ETHERSCAN_API_KEY
  },
  paths: {
    deploy: 'deploy',
    deployments: 'deployments'
  }
};
The deploy script:

module.exports = async ({ namedAccounts, deployments }) => {
  const { deployIfDifferent, log } = deployments;
  console.log('**** namedAccounts ***', namedAccounts);

  const { deployer } = namedAccounts;

  console.log('**** AGENT ***', namedAccounts);

  const storage = await deployIfDifferent('data', 'EternalStorage', { from: deployer }, 'EternalStorage')

  if (storage.newlyDeployed) {
    log(`contract Storage has been deployed: ${storage.address}`);
  }
}
module.exports.tags = ['EternalStorage'];
The output:

All contracts have already been compiled, skipping compilation.
**** namedAccounts *** undefined
An unexpected error occurred:

Error: ERROR processing /mnt/f/projects/PanDAO/pandao-contracts/deploy/eternalStorage.js:
TypeError: Cannot destructure property `deployer` of 'undefined' or 'null'.
    at Object.module.exports [as func] (/mnt/f/projects/PanDAO/pandao-contracts/deploy/eternalStorage.js:5:24)
    at DeploymentsManager.runDeploy (/mnt/f/projects/PanDAO/pandao-contracts/node_modules/buidler-deploy/src/DeploymentsManager.ts:557:32)
    at SimpleTaskDefinition.config_1.internalTask.addOptionalParam.addOptionalParam.addOptionalParam.addOptionalParam.setAction [as action] (/mnt/f/projects/PanDAO/pandao-contracts/node_modules/buidler-deploy/src/index.ts:91:33)
    at Environment._runTaskDefinition (/mnt/f/projects/PanDAO/pandao-contracts/node_modules/@nomiclabs/buidler/src/internal/core/runtime-environment.ts:197:35)
    at Environment.run (/mnt/f/projects/PanDAO/pandao-contracts/node_modules/@nomiclabs/buidler/src/internal/core/runtime-environment.ts:122:17)
    at SimpleTaskDefinition.config_1.task.addOptionalParam.addOptionalParam.setAction [as action] (/mnt/f/projects/PanDAO/pandao-contracts/node_modules/buidler-deploy/src/index.ts:104:17)
    at DeploymentsManager.runDeploy (/mnt/f/projects/PanDAO/pandao-contracts/node_modules/buidler-deploy/src/DeploymentsManager.ts:560:19)

Chain id issue when deploying to rinkeby testnet

Issue

Deployments fail to rinkeby testnet, with the following error:

Error: Network name ("rinkeby") is confusing, chainId is 0x4. Was expecting 0x04
    at DeploymentsManager.getDeploymentsSubPath (/home/pax/src/buidler-deploy-ts-test/node_modules/buidler-deploy/src/DeploymentsManager.ts:662:15)
    at DeploymentsManager.loadDeployments (/home/pax/src/buidler-deploy-ts-test/node_modules/buidler-deploy/src/DeploymentsManager.ts:279:12)
    at process._tickCallback (internal/process/next_tick.js:68:7)

Steps to reproduce:

I've taken this test repo, and installed the latest version of buidler-deploy (yarn install buidler-deploy@latest).

Here is the package.json after the installation:

{
  "devDependencies": {
    "@nomiclabs/buidler": "^1.2.0",
    "@types/chai": "^4.2.11",
    "@types/mocha": "^7.0.2",
    "@types/node": "^13.11.0",
    "buidler-deploy": "^0.3.3",
    "buidler-ethers-v5": "^0.2.0",
    "chai": "^4.2.0",
    "cross-env": "^7.0.2",
    "dotenv": "^8.2.0",
    "ethers": "^5.0.0-beta.180",
    "ts-node": "^8.8.2",
    "typescript": "^3.8.3"
  },
  "scripts": {
    "test": "buidler test",
    "run:rinkeby": "cross-env BUIDLER_NETWORK=rinkeby ts-node --files",
    "deploy:rinkeby": "buidler --network rinkeby deploy",
    "dev": "buidler listen --export contractsInfo.json"
  }
}

Running yarn deploy:rinkeby results in the above error.

Notes

The bug does not appear on buidler-deploy version v0.2.1, only on the upgrade.

It would appear that the issue comes from a mismatch between the types of the chainID in the Buidler env (number) and the DeployManager (hex string).

Also, the error message is confusing (oh the irony :D)

Consider renaming deployIfDifferent to deploy

This is an issue of ergonomics, and I feel that a simple deploy() function is more friendly to new users than deployIfDifferent. It also makes the code a lot easier to read and skim over.

Ideally, deploy should be idempotent anyway, so the default behaviour of deploy could be to skip deployment if nothing has changed. But in case people actually want to make sure deployment happens every time, an option could be added to the options object for this:

const deployOpts = {
  skipIfNoChange: false // true by default
}

I'm open to opposing thoughts, just wanted to provide an observation.

Instruct not to deploy a contract

Need to have (or to document, if it already exists) a way to ensure that the next buidler deploy command won't deploy a certain smart contract but will assume that it is already deployed (even if it is not).

I have a transaction stalled pending

Copied from NomicFoundation/hardhat#686:

I have a transaction started by Buidler Deployer stalled pending and don't know what to do.

I don't know how to find its nonce, nor how to un-stall it once I do.

When I try to deploy again:

npx buidler deploy --network rinkeby 
All contracts have already been compiled, skipping compilation.
Extracting ABIs...
An unexpected error occurred:

TypeError: Cannot read property 'toHexString' of undefined
    at isHexable (/home/porton/Projects/bounties/cryptozon/eth/node_modules/@ethersproject/bytes/lib/index.js:8:21)
    at Object.arrayify (/home/porton/Projects/bounties/cryptozon/eth/node_modules/@ethersproject/bytes/lib/index.js:65:9)
    at Object.decode (/home/porton/Projects/bounties/cryptozon/eth/node_modules/@ethersproject/rlp/lib/index.js:115:25)
    at Object.parse (/home/porton/Projects/bounties/cryptozon/eth/node_modules/@ethersproject/transactions/lib/index.js:124:27)
    at DeploymentsManager.dealWithPendingTransactions (/home/porton/Projects/bounties/cryptozon/eth/node_modules/buidler-deploy/src/DeploymentsManager.ts:284:18)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at DeploymentsManager.runDeploy (/home/porton/Projects/bounties/cryptozon/eth/node_modules/buidler-deploy/src/DeploymentsManager.ts:519:9)
    at Environment._runTaskDefinition (/home/porton/Projects/bounties/cryptozon/eth/node_modules/@nomiclabs/buidler/src/internal/core/runtime-environment.ts:196:14)
    at SimpleTaskDefinition.action (/home/porton/Projects/bounties/cryptozon/eth/node_modules/buidler-deploy/src/index.ts:144:7)
    at Environment._runTaskDefinition (/home/porton/Projects/bounties/cryptozon/eth/node_modules/@nomiclabs/buidler/src/internal/core/runtime-environment.ts:196:14)

The sources are at https://github.com/vporton/cryptozon/tree/105ae3e4da93c164c86b67817793b583e9211193.

Confusing behavior when running `buidler node` vs. `buidler deploy` with no args

I noticed today that if I run npx buidler node with no args, then it will run my deployment scripts and save the output to a folder called localhost_31337, which seems correct. But if I do the same with npx buidler deploy, then nothing is saved. I find this pretty confusing, and it caused me some headaches. I realized if I do npx buidler deploy --network localhost then I get the behavior I was expecting. I just didn't think I would have to do this.

I see the README says Note that running buidler deploy without specifying a network will use the default network. If the default network is an internal ganache or buidlerevm then nothing will happen as a result but this can be used to ensure the deployment is without issues.. I read this and assumed that "default network" would be localhost, and so having to explicitly say --network localhost would be unnecessary. Perhaps that's not correct?

My expectation would be that npx buidler deploy and npx buidler node would produce the same result, except that deploy would not keep the network going the way that node does, and so making the behavior the same would be my preference. If you think this discrepancy is correct though, then I think this could be cleared up in the README by saying something like, "running buidler deploy without any network will not save any deployments, and nor will it use any previously saved deployments. You must specify a network to get this behavior"

`buidler node` does not deploy contracts

The documentation states that buidler node automatically deploys contracts.
That is not happening for me.
I don't know if I'm missing something.

I'm using the following dependencies (among others)

    "@nomiclabs/buidler": "^1.3.8",
    "@nomiclabs/buidler-ethers": "^2.0.0",
    "@nomiclabs/buidler-waffle": "^2.0.0",
    "@nodefactory/buidler-typechain": "^0.2.0",
    "buidler-deploy": "^0.4.13",

Import deployments

The deployments object could provide a deployments.import method to import artifacts previously produced by truffle, or even by buidler-deploy in another dependent project packaged as an npm module.

I don't know if I'm missing something about how to wire up dependent projects and their artifacts.

Allow passing of arrays to entries in namedAccounts

buidler-deploy allowed passing arrays of numbers, like so:

namedAccounts: {
  deployer: 0,
  users: [1, 2, 3]
}

which caused getNamedAccounts() to return:

{
  deployer: "0x1234...",
  users: ["0x5678...", "0x9876...", "0x5432"]
}

When using version ^0.7.0-beta.28 with the configuration above, it returns:

{
  deployer: "0x1234..."
}

It would be nice if the old behavior was implemented again.

Error setting the private key for deployer

According to documentation, it's possible to pass the private key in the from field of the deploy method.
When doing so, I get this error from ethers.js: Error: missing provider (operation="estimateGas", code=UNSUPPORTED_OPERATION, version=abstract-signer/5.0.5)
I think you need to additionally pass the provider here https://github.com/wighawag/hardhat-deploy/blob/master/src/helpers.ts#L940

P.S. Are there other ways to set the private key that will be used for contract deployment to non-local networks?
The only option I found is to pass that private key with the from param.

`saveDeployment: false` is ignored

saveDeployment: false is ignored by Buidler Deploy 0.6.0-beta.16.

I have

    ganache: {
      gasLimit: 6000000000,
      defaultBalanceEther: 100,
      url: "http://localhost:8545",
      live: false,
      saveDeployment: false,
    },

but

npx buidler deploy --network ganache

does create files in deployments/ganache/.

Feature request: allow to pass afterDeploy script

It would be very handy to have an --afterDeploy <filename> param that would be executed the same way hardhat run <filename> works.
Imagine you need to generate some additional internal files with the deployment data.

Issue with licenses with etherscan-verify

I believe this line in etherscan-verify is only getting the 2nd character of the matched regex, and as a result I get errors like this:
license :"N" not supported by etherscan, list of supported license can be found here : https://etherscan.io/contract-license-types . This tool expect the SPDX id, except for "None" and "UNLICENSED" when I have a comment like this:

// SPDX-License-Identifier: UNLICENSED

etherscan-verify gives me 'metadata.matchAll is not a function'

TypeError: metadata.matchAll is not a function
    at extractLicenseFromSources (/Users/web3dotguru/contracts/node_modules/buidler-deploy/src/etherscan.ts:36:60)
    at extractOneLicenseFromSourceFile (/Users/web3dotguru/contracts/node_modules/buidler-deploy/src/etherscan.ts:26:20)
    at submit (/Users/web3dotguru/contracts/node_modules/buidler-deploy/src/etherscan.ts:139:29)
    at process._tickCallback (internal/process/next_tick.js:68:7)

The reason is that matchAll is only supported on Node 12 and later.

Transaction Execution Logging

During a production deployment, it is important to maintain a history of each transaction during a deployment for debugging purposes.

Currently, the output of buidler-deploy contains the contracts and ABIs, but does not include information about other transactions (e.g. execute or rawTransaction) in the output.

It would be great to have an output that is able to track the addresses and transactions.

To start the conversation about a data structure of a desirable output, see below.

{
  "state": {
    "network_key": "mainnet",
    "network_id": 1
  },
  "addresses": {
    "CONTRACT_NAME": "0x000"
  },
  "transactions": {
    "76": {
      "id": "TX_HASH",
      "timestamp": 1562613030126,
      "description": "Executing blah"
    }
  }
}

Add a typescript section to README

I know the project already includes typings, which is great!
But I'm not very familiar with type extensions and I'm not sure how to add typing to deploy scripts.
It would be super helpful to have a typescript section to the README explaining what needs to be changed.

Update Diamonds With The Latest Changes From the Diamond Standard

The Diamond Standard recently had a major revision.

The diamond functionality should be updated with the latest version of the standard. Mainly the diamondCut function has changed. It now has two more parameters.

The updated code can be found from the Diamond reference implementation: https://github.com/mudgen/Diamond

See this blog post for what has changed in the standard: https://dev.to/mudgen/update-what-s-new-in-the-diamond-standard-eip-fjk

Consider new API for deploy

The current deployIfDifferent function accepts quite a few arguments, and it can be confusing to new users of this plugin. Here is the source of the function I had to look at in order to understand what the function calls actually mean.

source: https://github.com/wighawag/buidler-deploy/blob/177e8c1ac598180c5b827adcbec274a65d3bd4ca/src/utils/eth.js#L225-L233

The current example usage in the README is the following (reformatted with Prettier):

const deployResult = await deployIfDifferent(
  ["data"],
  "GenericMetaTxProcessor",
  { from: deployer, gas: 4000000 },
  "GenericMetaTxProcessor"
);
const deployResult = await deployIfDifferent(
  "data",
  "Token",
  { from: deployer },
  "Token"
);
const deployResult = await deployIfDifferent(
  "data",
  "ERC721BidSale",
  { from: deployer },
  "ERC721BidSale",
  Token.address,
  1,
  3600
);

Using Named Parameters

At first glance, this code doesn't really tell you what is going on (unless you are already very familiar with the API). I suggest we use named parameters, as they are a lot more explicit and actually quite flexible if you wish to add or remove features later on.

For example (along with my suggestion from #6):

const deployResult = await deploy({
  name: "GenericMetaTxProcessor",
  contractName: "GenericMetaTxProcessor",
  fieldsToCompare: ["data"],
  options: { from: deployer, gas: 4000000 }
});
const deployResult = await deploy({
  name: "Token",
  contractName: "Token",
  fieldsToCompare: "data",
  options: { from: deployer }
});
const deployResult = await deploy({
  name: "ERC721BidSale",
  contractName: "ERC721BidSale",
  fieldsToCompare: "data",
  options: { from: deployer },
  args: [Token.address, 1, 3600]
});

Further Refinement

A further refinement could be to:

  1. Assume a default value for fieldsToCompare to be data.
  2. Assume contractName to be the same as name (can specify if it's not the case)
  3. In-line the options object

This would lead to:

const deployResult = await deploy({
  name: "GenericMetaTxProcessor",
  from: deployer,
  gas: 4000000,
});
const deployResult = await deploy({
  name: "Token",
  from: deployer,
});
const deployResult = await deploy({
  name: "ERC721BidSale",
  args: [Token.address, 1, 3600],
  from: deployer,
});

Conclusion

This is just to spark some discussion, I hope this has been useful. I think simplifying the API for end-users is an important step of getting adoption.

Deploy one contract several times

Please add the ability to deploy one contract several times (under different names), like:

await deploy("Contract", { name: "Contract1", from: deployer, args: [1] });
await deploy("Contract", { name: "Contract2", from: deployer, args: [2] });

Consider removing `deployments` when the `clean` task is run

I'm not sure if this makes sense, but it might? I kind of expected this to happen. I think this depends on how deployments is meant to be used. For example, is it meant to be committed? If it is, then the current behavior is ok. If not, maybe it should be removed when clean is executed.

Proxy Deployment: need option to force skip Proxy redeployments

I use proxy deployments and I recently changed my compiler version from 0.7.5 to 0.8.0.

I had an EIP173ProxyWithReceive in my project folder so hardhat-deploy used that one. I compiled that one with 0.7.5.

Now here is my issue:

I want to compile all my contracts with 0.8.0 starting today. But if I change the EIP173ProxyWithReceive compiler pragma its bytecode might change too. I suspect that this will cause hardhat-deploy to deploy a new Proxy, even for existing deployments. However, I want existing deployments to keep and reuse their old Proxy deployment.

So I guess I need some sort of option field to override this behaviour. Usually for normal contract deployments I use the skip function export to have control over the deployments the tool makes. However, with proxy deployments there is no deploy script that I am aware of. Probably there is a generic one shared by all contracts that use Proxies. So I guess the skip function cannot be replicated here.

Maybe a field like proxy.noRedeploy might work, whose default value is false:

  await deploy("Foo", {
    from: deployer,
    proxy: {
      proxyContract: "EIP173ProxyWithReceive",
      noRedeploy: true
    },

I am sure there is a more succinct name instead of noRedeploy.

Sample script makes `node` fail for trivial scenario

I created a very simple project with a contract Foo and a script based on the one shown in the README:

module.exports = async ({getNamedAccounts, deployments}) => {
    const {deployIfDifferent, log} = deployments;
    const {deployer} = await getNamedAccounts();

    let contract = await deployments.get('Foo');

    if (!contract) {
        const deployResult = await deployIfDifferent([], "Foo",  {from: deployer, gas: 4000000}, "Foo");
        contract = await deployments.get('Foo');
        if(deployResult.newlyDeployed) {
            log(`Foo deployed at ${contract.address} for ${deployResult.receipt.gasUsed}`);
        }
    }
}

If I then run buidler node I get an error:

Error: No deployment found for: Foo

Am I doing something wrong, or is the script in the readme outdated?

deploy() ignores transaction overrides

The transaction overrides are ignored for deploy transactions:

deployIfDifferent(['data'], name, { gas: 2000000, gasPrice: 3e9 }, name, ...args)

The overrides are propagated correctly to the point where ethers-v5 factory.deploy() is called:
https://github.com/wighawag/buidler-deploy/blob/e20cadb802ba128abeba78b82179fdc28b1d2ad2/src/utils/eth.js#L94

This may be a problem with ethers-v5. Maybe here:
https://github.com/ethers-io/ethers.js/blob/db604aa6afc007f8198fc730b7db4f9ae3876c58/packages/contracts/src.ts/index.ts#L983
where this.interface.deploy.inputs seems to be [], which causes resolveAddresses() to return nothing. If you agree I can post an issue over there.

user deploy scripts may be skipped or repeated

Hello, a few little issues I found with buidler-deploy version 0.1.9 with TypeScript.

  • In src/utils/eth.js, two cases of "catch{}" should be changed to "catch(e) {}".

  • The declared signature of DeploymentsExtension.run() has "tags?: string | string[]" but it only works for the string array. The problem is DeploymentsManager.ts line 266 "for (const tagToFind of tags) { ", which does not distinguish the two types.

  • In DeploymentsExtension.ts the recurseDependencies() function may add the same script twice. At line 294, after the recursive call, it doesn't re-test the scriptsRegisteredToRun condition, which may have changed during the recursive call.

  • If the user defines multiple deploy/xxx.ts scripts and these each define a deploy() function with the same function body, then only the first script will run. The reason is that recurseDependencies() uses the user deploy() function as an object key, and the JavaScript engine treats identical function bodies as the same key.

Method 'facets' not found when updating diamond

I get the following error after running deploy the second time on a diamond. It seems like the Loupe facet is not registered / deployed in the initial deployment of the diamond.

buidler: 1.4.3
buidler-deploy: ^0.5.11

Error: ERROR processing /Users/johannes/Documents/dev/actus-protocol/ap-monorepo/packages/protocol/deploy/3-deploy-contracts.js:
Error: call revert exception (method="facets()", errorSignature=null, errorArgs=[null], reason=null, code=CALL_EXCEPTION, version=abi/5.0.3)
    at Logger.makeError (/Users/johannes/Documents/dev/actus-protocol/ap-monorepo/packages/protocol/node_modules/@ethersproject/logger/src.ts/index.ts:205:28)
    at Logger.throwError (/Users/johannes/Documents/dev/actus-protocol/ap-monorepo/packages/protocol/node_modules/@ethersproject/logger/src.ts/index.ts:217:20)
    at Interface.decodeFunctionResult (/Users/johannes/Documents/dev/actus-protocol/ap-monorepo/packages/protocol/node_modules/@ethersproject/contracts/node_modules/@ethersproject/abi/src.ts/interface.ts:326:23)
    at Contract.<anonymous> (/Users/johannes/Documents/dev/actus-protocol/ap-monorepo/packages/protocol/node_modules/@ethersproject/contracts/src.ts/index.ts:291:44)
    at step (/Users/johannes/Documents/dev/actus-protocol/ap-monorepo/packages/protocol/node_modules/@ethersproject/contracts/lib/index.js:46:23)
    at Object.next (/Users/johannes/Documents/dev/actus-protocol/ap-monorepo/packages/protocol/node_modules/@ethersproject/contracts/lib/index.js:27:53)
    at fulfilled (/Users/johannes/Documents/dev/actus-protocol/ap-monorepo/packages/protocol/node_modules/@ethersproject/contracts/lib/index.js:18:58)
    at process._tickCallback (internal/process/next_tick.js:68:7)
    at DeploymentsManager.runDeploy (/Users/johannes/Documents/dev/actus-protocol/ap-monorepo/packages/protocol/node_modules/buidler-deploy/src/DeploymentsManager.ts:851:19)
    at process._tickCallback (internal/process/next_tick.js:68:7)
error Command failed with exit code 1.

deployments.run() disables .save() inside deployment scripts

If you have a deployment script, say deploy/01-Sample.ts, and that script has code along the lines of:

async function deploy(bre, name) {
  const dr = await deployIfDifferent(name, ...)
  if (dr.newlyDeployed) {
    const dep = await deployments.get(name)
    deployments.save(name, dep)
  }
}

Then if you run npx buidler deploy, the save() operation writes to the deployments folder, as expected. If you run npx buidler deploy again (when using a non-transient network, like mainnet), the saved deployments are loaded, checked on-chain, recognized as identical, and not re-deployed. Great!

But if you call deployments.run([name]) from a script or a test, then the contract is deployed but the save() operation does not write any files. Repeating the operation re-deploys the contract.

I'm not sure if this is intended, but the reason is that the noSaving option has default value true here:
https://github.com/wighawag/buidler-deploy/blob/b8a202ff070602d8dd1973ab9a98db58603bd708/src/DeploymentsManager.ts#L146-L155

The option is not exposed in the public type interface here:
https://github.com/wighawag/buidler-deploy/blob/b8a202ff070602d8dd1973ab9a98db58603bd708/src/type-extensions.d.ts#L73-L76

If this is all intended, then how do I achieve this pattern:

  • Using a persistent network like mainnet
  • npx buidler test deploys contracts if needed, but does not re-deploy if already existing

Ignore hidden files

Ignore files that start with . when reading all the files inside the deploy directory.

This was an issue for me because vim saves swap files next to the files you are editing, and so buidler-deploy tries to read them, which of course fails.

Conflict with hardhat-ethers when running tsc

When we run tsc we get these errors:

node_modules/@nomiclabs/hardhat-ethers/dist/src/type-extensions.d.ts:6:10 - error TS2300: Duplicate identifier 'Libraries'.

6     type Libraries = LibrariesT;
           ~~~~~~~~~

  node_modules/hardhat-deploy/dist/src/type-extensions.d.ts:261:10
    261     type Libraries = {
                 ~~~~~~~~~
    'Libraries' was also declared here.

node_modules/hardhat-deploy/dist/src/type-extensions.d.ts:261:10 - error TS2300: Duplicate identifier 'Libraries'.

261     type Libraries = {
             ~~~~~~~~~

  node_modules/@nomiclabs/hardhat-ethers/dist/src/type-extensions.d.ts:6:10
    6     type Libraries = LibrariesT;
               ~~~~~~~~~
    'Libraries' was also declared here.


Found 2 errors.

Unhelpful error when "execute" fails

When an execute call fails, an unhelpful error is thrown, that makes it difficult to debug:

Error: ERROR processing /deploy/deployment.js:
ProviderError: The execution failed due to an exception.
    at HttpProvider.request (node_modules/hardhat/src/internal/core/providers/http.ts:46:19)
    at HDWalletProvider.request (node_modules/hardhat/src/internal/core/providers/accounts.ts:131:34)
    at process._tickCallback (internal/process/next_tick.js:68:7)
    at DeploymentsManager.executeDeployScripts (node_modules/hardhat-deploy/src/DeploymentsManager.ts:991:19)
    at process._tickCallback (internal/process/next_tick.js:68:7)

I wonder if it's possible to pre-generate an error, so that the stack trace points to the failing call in the deployment script

Slow down blockchain calls during deployment

How can I slow down calls to the blockchain during deployment?

Currently I'm rate limited to about 10 calls per second by my JSON RPC provider, unfortunately the deploy plugin makes a bunch of quick successive calls during deployment, hitting my limit.

How can I slow down these calls, or put some sleeping in the middle of the process?

Thanks

Override of useLiteralContent is problematic

The line here that overrides the useLiteralContent compiler setting results in an error when running buidler solidity-coverage. The error is "InternalCompilerError: Metadata too large.".

For now I've simply set an environment variable and skip loading buidler-deploy when coverage is running, but there must be a more elegant solution.

importing hardhat-deploy causes recompile even if no changes

When hardhat-deploy is imported in hardhat.config.ts, the npx hardhat compile task always recompiles all contracts even if they are unchanged.

Sample reproduction:

package.json:

{
  "name": "testhardhat",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {},
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "@types/chai": "^4.2.14",
    "@types/mocha": "^8.0.3",
    "@types/node": "^14.14.2",
    "chai": "^4.2.0",
    "ethers": "^5.0.19",
    "hardhat": "^2.0.1",
    "ts-node": "^9.0.0",
    "typescript": "^4.0.3",
    "web3": "^1.3.0",
    "hardhat-deploy": "^0.7.0-beta.9"
  }
}

hardhat.config.ts:

import 'hardhat-deploy'
const config: any = {
  paths: {
    sources: './contracts',
    artifacts: './artifacts',
    tests: './test'
  },
  solidity: {
    version: '0.5.17'
  }
}
export default config

contracts/Test.sol:

pragma solidity ^0.5.17;
contract Test {
}

Then run

npm install
npx hardhat compile
npx hardhat compile

The second compile should report nothing to do, but instead it recompiles the contract. Observed result:

Compiling 1 file with 0.5.17
Compilation finished successfully
Compiling 1 file with 0.5.17
Compilation finished successfully

Environment:
Inside Docker container
Docker engine 19.03.12
Container OS: Ubuntu 20.04
Node.js: 10.21.0
hardhat: 2.0.1
hardhat-deploy: 0.7.0-beta.14
