
cerf's Issues

Add additional variables to CERF output file

The following variables from the NLC calculation should be added to the output csv file.

  • discount_rate
  • annuity_factor
  • levelization_factor_vom
  • levelization_factor_fuel
  • levelization_factor_carbon

Additionally, they should be added to the documentation in the key outputs table https://immm-sfa.github.io/cerf/user_guide.html#key-outputs

| Name | Description | Units |
| --- | --- | --- |
| discount_rate | interest rate used to determine the present value of future cash flows | N/A |
| annuity_factor | factor used to determine the present value of an annual cost based on the life of the asset and the discount rate | N/A |
| levelization_factor_vom | factor used to determine the levelized variable operations & maintenance costs over the life of the asset | N/A |
| levelization_factor_fuel | factor used to determine the levelized fuel costs over the life of the asset | N/A |
| levelization_factor_carbon | factor used to determine the levelized carbon tax costs over the life of the asset | N/A |
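For context, these factors follow standard engineering-economics definitions. A minimal sketch in Python (not necessarily CERF's exact implementation, and assuming the escalation rate differs from the discount rate):

def annuity_factor(discount_rate: float, lifetime_yrs: int) -> float:
    """Present-value annuity factor over the life of the asset."""
    d, n = discount_rate, lifetime_yrs
    return d * (1 + d) ** n / ((1 + d) ** n - 1)

def levelization_factor(discount_rate: float, lifetime_yrs: int, esc_rate: float) -> float:
    """Levelization factor for an annually escalating cost (VOM, fuel, or carbon tax)."""
    d, n, e = discount_rate, lifetime_yrs, esc_rate
    k = (1 + e) / (1 + d)  # per-year discounted escalation ratio; assumes e != d
    return annuity_factor(d, n) * k * (1 - k ** n) / (1 - k)

As a sanity check, a zero escalation rate yields a levelization factor of exactly 1.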

JOSS Review: Tests

Part of openjournals/joss-reviews#3601

I cloned the repository to my laptop (OSX 10.15.7) and created a new Python 3.9 environment using miniconda. I ran python setup.py develop to install the package in editable mode.

I manually installed pytest and pytest-cov and ran the tests using pytest. 19 tests passed.

% pytest
================================================== test session starts ==================================================
platform darwin -- Python 3.9.6, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /Users/wusher/repository/cerf
plugins: cov-2.12.1
collected 19 items                                                                                                      

cerf/tests/test_compete.py .                                                                                      [  5%]
cerf/tests/test_lmp.py .                                                                                          [ 10%]
cerf/tests/test_nov.py ...                                                                                        [ 26%]
cerf/tests/test_package_data.py ...........                                                                       [ 84%]
cerf/tests/test_read_config.py .                                                                                  [ 89%]
cerf/tests/test_spatial_reader.py .                                                                               [ 94%]
cerf/tests/test_utils.py .                                                                                        [100%]

================================================== 19 passed in 5.25s ===================================================
  • Please document in contributing guidelines how to run the tests
  • I would recommend pulling the tests out of the cerf/cerf subfolder in the repository. These are not needed for installation of the package. Also, pulling them out of the package means that a python setup.py develop installation is required to run the tests - useful for checking the interface to the library.
  • pytest and pytest-cov are not included in the develop dependencies (dev under extras_require in setup.py).
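A minimal sketch of how the dev extra might be declared in setup.py (version pins are illustrative):

from setuptools import find_packages, setup

setup(
    name="cerf",
    packages=find_packages(),
    extras_require={
        "dev": [
            "pytest>=6.2",
            "pytest-cov>=2.12",
        ]
    },
)

Contributors could then install the test dependencies alongside an editable install with pip install -e .[dev].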

JOSS REVIEW: Location of data

Part of openjournals/joss-reviews#3601

Hi @crvernon. Many thanks for your quick work on this review. The package is shaping up nicely.

Currently there is a large (200 MB) data folder nestled in the package source. This data is copied over to local machines when users install the package, and it bloats your git repository and the package, wasting bandwidth on PyPI and GitHub.

On the plus side, it makes your work easily reproducible for your particular analysis - you install the package and run the command.

I would advise removing the data from the package and hosting it elsewhere, perhaps on Zenodo as a dataset, in the Harvard Dataverse, or another online data deposit. A helper function (perhaps the code currently in package_data.py?) could then retrieve the data (which might be useful for the example notebook), and you would avoid the repository bloat. This would be consistent with the principle of separation of data and code, where data is specific to a particular analysis while code is used across analyses.

Alternatively, and if you wish to maximise reproducibility of the analysis for CONUS, deposit the data as advised above, and then create a second Github repo containing a script which installs cerf, downloads the data and runs the analysis.

The focus of this review is the cerf software package, so there's no hard requirement on what you do with the data, although I'd recommend the above. To summarise the steps:

  • remove the data from the repository
  • ensure that the data (or alternative) is available for the tutorial notebook
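A minimal sketch of such a retrieval helper, assuming the data is deposited as a single zip archive (the URL, function name, and default directory are placeholders):

import io
import zipfile
from pathlib import Path

import requests

DATA_URL = "https://zenodo.org/record/<record_id>/files/cerf_data.zip"  # placeholder

def install_package_data(destination: str = "cerf_data") -> Path:
    """Download and unpack the CERF example data to a local directory."""
    target = Path(destination)
    target.mkdir(parents=True, exist_ok=True)

    response = requests.get(DATA_URL, timeout=600)
    response.raise_for_status()

    with zipfile.ZipFile(io.BytesIO(response.content)) as archive:
        archive.extractall(target)

    return target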

Update LMP calculation to include various calculation types

Change the LMP component to accommodate different calculation types.

Instead of just taking the average of the LMPs that fall within the range tied to the technology capacity factor, there should be a few options for determining the LMP value. The LMP calculation type should be added as an option in the configuration file.

Suggested calculation types:

  1. Capacity factor-driven (average of LMP range tied to capacity factor)
  2. Annual Average (would be used for wind and would just take the average of all 8760 LMPs)
  3. Specific Hours (would be used for solar, would take the average of specific hours)

The 3rd option could be implemented by also requiring a path in the config yaml to a csv file indicating which hours to use: hours 1 through 8760 in column 1 and a binary flag in column 2 (1 to include, 0 to disregard).
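A minimal sketch of how the three calculation types might look (function and parameter names are hypothetical):

import numpy as np
import pandas as pd

def calculate_lmp(lmp_8760, method="cf", capacity_factor=None, hours_file=None):
    lmp_8760 = np.asarray(lmp_8760)

    if method == "cf":
        # capacity factor-driven: average over the fraction of sorted
        # hours tied to the technology capacity factor
        n_hours = int(round(capacity_factor * lmp_8760.size))
        return np.sort(lmp_8760)[::-1][:n_hours].mean()

    if method == "annual":
        # annual average over all 8760 hours (e.g., wind)
        return lmp_8760.mean()

    if method == "hours":
        # specific hours (e.g., solar); CSV with hour 1-8760 in column 1
        # and a binary include flag in column 2
        hours = pd.read_csv(hours_file, header=None)
        mask = hours.iloc[:, 1].to_numpy().astype(bool)
        return lmp_8760[mask].mean()

    raise ValueError(f"unknown LMP calculation type: {method}")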

Change to native units

Change expected units for fuel_price from GCAM's $/GJ to CERF's native units of $/MBtu.

Change expected units for fuel_co2_content from GCAM's tons/MWh to CERF's native units of tons/Btu.
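A minimal sketch of the implied conversions, assuming $/MBtu here means million Btu (MMBtu) and using the standard constants 1 MMBtu = 1.055056 GJ and 1 MWh = 3,412,142 Btu:

GJ_PER_MMBTU = 1.055056
BTU_PER_MWH = 3_412_142

def fuel_price_usd_per_mmbtu(price_usd_per_gj: float) -> float:
    """Convert a GCAM fuel price in $/GJ to CERF's native $/MMBtu."""
    return price_usd_per_gj * GJ_PER_MMBTU

def fuel_co2_content_tons_per_btu(co2_tons_per_mwh: float) -> float:
    """Convert GCAM CO2 content in tons/MWh to CERF's native tons/Btu."""
    return co2_tons_per_mwh / BTU_PER_MWH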

Add spatial data reader

Add reader to convert a directory of rasters used for a specific exclusion category (e.g., common, nuclear, etc.) to a binary numpy array where 0 is suitable and >0 is not.
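A minimal sketch of such a reader, assuming GeoTIFF inputs (the function name and file pattern are hypothetical):

from pathlib import Path

import numpy as np
import rasterio

def build_exclusion_array(raster_dir: str) -> np.ndarray:
    """Combine a directory of exclusion rasters into one binary array
    where 0 is suitable and 1 is excluded."""
    arrays = []
    for raster_file in sorted(Path(raster_dir).glob("*.tif")):
        with rasterio.open(raster_file) as src:
            arrays.append(src.read(1))

    # any cell flagged (>0) by any raster is unsuitable
    return (np.stack(arrays) > 0).any(axis=0).astype(np.uint8)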

JOSS Review: Installation

Part of openjournals/joss-reviews#3601

I used miniconda to create a new Python 3.9 environment on OSX 10.15.7 and then ran the command pip install cerf. This installed all dependencies without a problem.

  • Improve the readme for the quickstart tutorial. Dependencies (e.g., Jupyter) for the tutorial notebooks are not installed, and the quickstart tutorial is not available after installing the cerf package (i.e., it is not placed anywhere on the user's computer). This would be confusing for a beginner. It might be simpler to integrate the information from the notebook into the documentation and make the notebook downloadable from the docs (with separate installation instructions for the tutorial there).

JOSS Review: Big picture

Part of #3601

@crvernon - As I've dug into the package, I've seen a lot of useful functionality which could benefit other researchers and would make cerf a more widely used package.

The core of what cerf provides, the spatial allocation of generation plants given local marginal prices and feasibility constraints, is something that could see broader uptake of the package in domains outside of CONUS.

During the review, we've "discussed" indirectly the scope of the package, and its purpose. I'm still a little unclear as to whether:

  1. cerf is focussed (limited?) on conducting this analysis for CONUS, in which case it provides data and a collection of classes and methods that implement this, but as a software package is of limited immediate general interest for users outside of CONUS.
  2. cerf is a package which implements a particular approach to spatial allocation for energy plants, providing a suite of software components that can be applied in different jurisdictions to this topic of spatial allocation of energy plants, and with specific data requirements.

Given the already relatively niche attraction of the package functionality, I believe that if 1. is the case then the package is not of sufficiently broad interest for the JOSS audience. Whereas with 2. I can see a clear case for publication in JOSS. But perhaps I am wrong here?

From a development perspective, we've tackled the data issue already (#52), and I think moving from 1. to 2. would require a concerted, but relatively small effort - mainly tying up loose ends (such as #56), especially as the documentation of data requirements etc. are already very good.

I'm happy to discuss this below, and it would be good to hear your thoughts. Perhaps it is also worth asking the editor (@fraukewiese) for their input and advice on the above?

JOSS Review: Documentation

Part of openjournals/joss-reviews#3601

The documentation is of high quality, readable and holds useful information relevant to new and experienced users of the package.

  • There is a clear statement of need
  • There are basic installation instructions (use pip) with a list of dependencies
  • A user guide provides two example uses of the package in a Jupyter notebook format.
  • An API reference provides documentation of the methods available to the user
  • There are instructions for potential contributors, but these are currently basic, e.g., they do not provide information on how to run the tests or set up a development environment

Recommendations

Contributing

  • As with #40, document in contributing.rst how to set up a development environment, dev dependencies, run the tests, build the documentation etc.
  • Consider setting up issue templates on Github [optional]
  • Document the release process
  • Document continuous integration

Getting started

  • Reorganise quickstart (ticked by #39)
  • Clean up hyperlinks - there are a number of links to the notebook contained in the repo e.g. from the "useful links" on the documentation landing page. Please remove extraneous links, or update them to point to the quickstart in the docs.
  • Typo: https://immm-sfa.github.io/cerf/user_guide.html#configration-file-setup should be "configuration"
  • It is unclear from the about text how generalisable this package is to other jurisdictions. For example, if I want to run this for an African country, what would I need to do? There's probably a large enough "market" in the USA to justify this type of scientific software, but clarifying this would be informative for potential users. In addition, understanding what type of models can link to this software would be useful. For example, could I soft-link the outputs from a regionally aggregate energy system model to cerf to gain extra spatial insight?
  • Document installation for users with Python environment managers such as venv and conda/miniconda. You could provide an environment.yaml download for conda users.
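A minimal sketch of what an environment.yaml for conda users might contain (the dependency list is illustrative, not exhaustive):

name: cerf
channels:
  - conda-forge
dependencies:
  - python=3.9
  - pip
  - pip:
      - cerf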

User guide

A readable and accessible guide. A large part of this section is devoted to writing a configuration file and describing the data inputs required to run the software. The software is very data intensive, which poses a barrier to using it outside of the USA. I wonder if the authors could do anything to make this easier for new users of the software?

  • Typo: "The following are the required key, values if your wish to construct your own configuration files:" should read "...required key values if you wish to..."
  • Tutorials section - update to link to quickstart (#39)

API Reference

The API reference is comprehensive. Docstrings are provided for all classes, methods and functions.

  • Page title not rendered (see screenshot in the original issue)
  • Tests are visible in API docs as a sub-package - should be fixed by #40

General

  • Authors of the documentation are not visible or attributed. Suggest adding the authors from the list in the JOSS review issue.
  • Provide hyperlinks/DOIs to papers which cite or use the software (it's useful for existing or potential users to see these).
  • Add a "how to cite" section to the readme and documentation, with BibTeX or alternatives for the JOSS paper.

Other observations

  • Using state_name in expansion_plan key of the configuration file seems to limit generalisability of the tool. Suggest using region_id or alternative in future refactors.

issues and workarounds with xarray and whitebox

A couple of notes:

  • newer versions of xarray/rasterio have deprecated xarray.open_rasterio(...) in favor of an additional dependency on rioxarray and rioxarray.open_rasterio(...); CERF could probably adopt this new paradigm and update the minimum dependency versions
  • newer versions of whitebox have added support for the musl C standard library, which can be activated in a couple of ways (see jblindsay/whitebox-tools#338); this could be utilized to avoid common failures on many Linux distros (including deception)
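For the first note, the migration is a small change; a minimal sketch (the file path is a placeholder):

# deprecated pattern:
#   import xarray as xr
#   da = xr.open_rasterio("example_raster.tif")

# new pattern, with rioxarray as an added dependency:
import rioxarray

da = rioxarray.open_rasterio("example_raster.tif")  # returns an xarray.DataArray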

JOSS Review: Installation

Part of #3601

The installation worked as outlined in the documentation.

  • The package size is very large. It would be useful for a user to have a badge of the repository size in the readme and also in the documentation.

  • Some of the data is already loaded by an extra function. Would it be possible to load more of the data (e.g., the comp_data folder) via a function instead of storing it in the repository, or to reduce the data size?

BUG: JOSS Review: Quickstarter

Part of #3601

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of cerf.

  • (optional) I have confirmed this bug exists on the master branch of cerf.


Note: Please read this guide detailing how to provide the necessary information for us to reproduce your bug.

Code Sample, a copy-pastable example

# run the configuration for the target year and return a data frame
result_df = cerf.run(config_file, write_output=False)

Problem description

When running the quickstarter Jupyter notebook I get an error in line 3.

Error message

2021-08-23 09:48:27,147 - root - INFO - Starting CERF model
2021-08-23 09:48:27,322 - root - INFO - Staging data...
2021-08-23 09:48:27,430 - root - INFO - Using 'zones_raster_file':  /home/ws/bw0928/miniconda3/envs/cerf/lib/python3.9/site-packages/cerf/data/lmp_zones_1km.img
2021-08-23 09:48:27,473 - root - INFO - Processing locational marginal pricing (LMP)
2021-08-23 09:48:27,474 - root - INFO - Using LMP from default illustrative package data:  /home/ws/bw0928/miniconda3/envs/cerf/lib/python3.9/site-packages/cerf/data/illustrative_lmp_8760-per-zone_dollars-per-mwh.zip
2021-08-23 09:48:39,617 - root - INFO - Calculating interconnection costs (IC)
2021-08-23 09:48:39,618 - root - INFO - Using default substations costs from file: /home/ws/bw0928/miniconda3/envs/cerf/lib/python3.9/site-packages/cerf/data/costs_per_kv_substation.yml
2021-08-23 09:48:39,621 - root - INFO - Using default substation file: /home/ws/bw0928/miniconda3/envs/cerf/lib/python3.9/site-packages/cerf/data/hifld_substations_conus_albers.zip
---------------------------------------------------------------------------
CPLE_OpenFailedError                      Traceback (most recent call last)
rasterio/_base.pyx in rasterio._base.DatasetBase.__init__()

rasterio/_shim.pyx in rasterio._shim.open_dataset()

rasterio/_err.pyx in rasterio._err.exc_wrap_pointer()

CPLE_OpenFailedError: /tmp/tmpv9o5nkwa/cerf_transmission_distance_substations.tif: No such file or directory

During handling of the above exception, another exception occurred:

RasterioIOError                           Traceback (most recent call last)
/tmp/ipykernel_32324/2829821130.py in <module>
      6 
      7 # run the configuration for the target year and return a data frame
----> 8 result_df = cerf.run(config_file, write_output=False)

~/miniconda3/envs/cerf/lib/python3.9/site-packages/cerf/process.py in run(config_file, config_dict, write_output, n_jobs, method, initialize_site_data, log_level)
    181 
    182         # process supporting data
--> 183         data = model.stage()
    184 
    185         # process all CERF CONUS states in parallel and store the result as a 2D arrays containing sites as

~/miniconda3/envs/cerf/lib/python3.9/site-packages/cerf/model.py in stage(self)
     71 
     72         # prepare all data for state level run
---> 73         data = Stage(self.settings_dict,
     74                      self.lmp_zone_dict,
     75                      self.technology_dict,

~/miniconda3/envs/cerf/lib/python3.9/site-packages/cerf/stage.py in __init__(self, settings_dict, lmp_zone_dict, technology_dict, technology_order, infrastructure_dict, initialize_site_data)
     93         # get interconnection cost per tech [tech_order, x, y]
     94         logging.info('Calculating interconnection costs (IC)')
---> 95         self.ic_arr = self.calculate_ic()
     96 
     97         # get NOV array per tech [tech_order, x, y]

~/miniconda3/envs/cerf/lib/python3.9/site-packages/cerf/stage.py in calculate_ic(self)
    151 
    152         # instantiate class
--> 153         ic = Interconnection(template_array=self.lmp_arr,
    154                              technology_dict=self.technology_dict,
    155                              technology_order=self.technology_order,

~/miniconda3/envs/cerf/lib/python3.9/site-packages/cerf/interconnect.py in __init__(self, template_array, technology_dict, technology_order, substation_file, transmission_costs_dict, transmission_costs_file, pipeline_costs_dict, pipeline_costs_file, pipeline_file, output_rasterized_file, output_dist_file, output_alloc_file, output_cost_file, interconnection_cost_file, output_dir)
    119 
    120         # calculate electricity transmission infrastructure costs
--> 121         self.substation_costs = self.transmission_to_cost_raster(setting='substations')
    122 
    123         # if there are any gas technlogies present, calculate gas pipeline infrastructure costs

~/miniconda3/envs/cerf/lib/python3.9/site-packages/cerf/interconnect.py in transmission_to_cost_raster(self, setting)
    331             alloc_result = wbt.euclidean_allocation(out_rast, out_alloc, callback=suppress_callback)
    332 
--> 333             with rasterio.open(out_dist) as dist:
    334                 dist_arr = dist.read(1)
    335 

~/miniconda3/envs/cerf/lib/python3.9/site-packages/rasterio/env.py in wrapper(*args, **kwds)
    433 
    434         with env_ctor(session=session):
--> 435             return f(*args, **kwds)
    436 
    437     return wrapper

~/miniconda3/envs/cerf/lib/python3.9/site-packages/rasterio/__init__.py in open(fp, mode, driver, width, height, count, crs, transform, dtype, nodata, sharing, **kwargs)
    218         # None.
    219         if mode == 'r':
--> 220             s = DatasetReader(path, driver=driver, sharing=sharing, **kwargs)
    221         elif mode == "r+":
    222             s = get_writer_for_path(path, driver=driver)(

rasterio/_base.pyx in rasterio._base.DatasetBase.__init__()

RasterioIOError: /tmp/tmpv9o5nkwa/cerf_transmission_distance_substations.tif: No such file or directory

Conda environment

# Name                    Version                   Build  Channel
_libgcc_mutex             0.1                 conda_forge    conda-forge
_openmp_mutex             4.5                       1_gnu    conda-forge
affine                    2.3.0                    pypi_0    pypi
attrs                     21.2.0                   pypi_0    pypi
backcall                  0.2.0              pyh9f0ad1d_0    conda-forge
backports                 1.0                        py_2    conda-forge
backports.functools_lru_cache 1.6.4              pyhd8ed1ab_0    conda-forge
ca-certificates           2021.7.5             h06a4308_1  
cerf                      2.0.3                    pypi_0    pypi
certifi                   2021.5.30                pypi_0    pypi
charset-normalizer        2.0.4                    pypi_0    pypi
click                     8.0.1                    pypi_0    pypi
click-plugins             1.1.1                    pypi_0    pypi
cligj                     0.7.2                    pypi_0    pypi
cloudpickle               1.6.0                      py_0    conda-forge
cycler                    0.10.0                   pypi_0    pypi
debugpy                   1.4.1            py39he80948d_0    conda-forge
decorator                 5.0.9              pyhd8ed1ab_0    conda-forge
fiona                     1.8.20                   pypi_0    pypi
geopandas                 0.9.0                    pypi_0    pypi
idna                      3.2                      pypi_0    pypi
ipykernel                 6.2.0            py39hef51801_0    conda-forge
ipython                   7.26.0           py39hef51801_0    conda-forge
ipython_genutils          0.2.0                      py_1    conda-forge
jedi                      0.18.0           py39hf3d152e_2    conda-forge
joblib                    1.0.1                    pypi_0    pypi
jupyter_client            6.1.12             pyhd8ed1ab_0    conda-forge
jupyter_core              4.7.1            py39hf3d152e_0    conda-forge
kiwisolver                1.3.1                    pypi_0    pypi
ld_impl_linux-64          2.36.1               hea4e1c9_2    conda-forge
libffi                    3.3                  h58526e2_2    conda-forge
libgcc-ng                 11.1.0               hc902ee8_8    conda-forge
libgomp                   11.1.0               hc902ee8_8    conda-forge
libsodium                 1.0.18               h516909a_1    conda-forge
libstdcxx-ng              11.1.0               h56837e0_8    conda-forge
matplotlib                3.4.3                    pypi_0    pypi
matplotlib-inline         0.1.2              pyhd8ed1ab_2    conda-forge
munch                     2.5.0                    pypi_0    pypi
ncurses                   6.2                  h58526e2_4    conda-forge
numpy                     1.21.2                   pypi_0    pypi
openssl                   1.1.1k               h7f98852_1    conda-forge
pandas                    1.3.2                    pypi_0    pypi
parso                     0.8.2              pyhd8ed1ab_0    conda-forge
pexpect                   4.8.0              pyh9f0ad1d_2    conda-forge
pickleshare               0.7.5           py39hde42818_1002    conda-forge
pillow                    8.3.1                    pypi_0    pypi
pip                       21.2.4             pyhd8ed1ab_0    conda-forge
prompt-toolkit            3.0.19             pyha770c72_0    conda-forge
ptyprocess                0.7.0              pyhd3deb0d_0    conda-forge
pygments                  2.10.0             pyhd8ed1ab_0    conda-forge
pyparsing                 2.4.7                    pypi_0    pypi
pyproj                    3.1.0                    pypi_0    pypi
python                    3.9.6           h49503c6_1_cpython    conda-forge
python-dateutil           2.8.2              pyhd8ed1ab_0    conda-forge
python_abi                3.9                      2_cp39    conda-forge
pytz                      2021.1                   pypi_0    pypi
pyyaml                    5.4.1                    pypi_0    pypi
pyzmq                     22.2.1           py39h37b5a0c_0    conda-forge
rasterio                  1.2.6                    pypi_0    pypi
readline                  8.1                  h46c0cb4_0    conda-forge
requests                  2.26.0                   pypi_0    pypi
rtree                     0.9.7                    pypi_0    pypi
scipy                     1.7.1                    pypi_0    pypi
seaborn                   0.11.2                   pypi_0    pypi
setuptools                57.4.0           py39hf3d152e_0    conda-forge
shapely                   1.7.1                    pypi_0    pypi
six                       1.16.0             pyh6c4a22f_0    conda-forge
snuggs                    1.4.7                    pypi_0    pypi
spyder-kernels            2.1.0            py39hf3d152e_0    conda-forge
sqlite                    3.36.0               h9cd32fc_0    conda-forge
tk                        8.6.11               h21135ba_0    conda-forge
tornado                   6.1              py39h3811e60_1    conda-forge
traitlets                 5.0.5                      py_0    conda-forge
tzdata                    2021a                he74cb21_1    conda-forge
urllib3                   1.26.6                   pypi_0    pypi
wcwidth                   0.2.5              pyh9f0ad1d_2    conda-forge
wheel                     0.37.0             pyhd8ed1ab_1    conda-forge
whitebox                  1.5.2                    pypi_0    pypi
wurlitzer                 3.0.0            py39hf3d152e_0    conda-forge
xarray                    0.19.0                   pypi_0    pypi
xz                        5.2.5                h516909a_1    conda-forge
zeromq                    4.3.4                h9c3ff4c_0    conda-forge
zlib                      1.2.11            h516909a_1010    conda-forge


modify process.py to generalize siting output file name

process.py currently sets the output csv file for all cerf runs to be named cerf_sited_{run_year}_conus.csv. This should be changed to not use "conus", to fit CERF's generalization, and to potentially read in the region_name in the same way that run_year is read in.

out_csv = os.path.join(model.settings_dict.get('output_directory'), f"cerf_sited_{model.settings_dict.get('run_year')}_conus.csv")
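A minimal sketch of the generalized version, where region_name is a hypothetical settings key read in the same way as run_year:

out_csv = os.path.join(
    model.settings_dict.get('output_directory'),
    f"cerf_sited_{model.settings_dict.get('run_year')}_{model.settings_dict.get('region_name')}.csv"
)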

Add additional variables to output

The following variables should be added to CERF's output dataframe in addition to those already provided. Note that these variable names follow the adjusted naming referenced in Issue #80.

  • capacity_factor_fraction
  • carbon_capture_rate_fraction
  • fuel_co2_content_tons_per_btu
  • fuel_price_usd_per_mmbtu
  • fuel_price_esc_rate_fraction
  • heat_rate_btu_per_kWh
  • lifetime_yrs
  • variable_om_usd_per_mwh
  • variable_om_esc_rate_fraction
  • carbon_tax_usd_per_ton
  • carbon_tax_esc_rate_fraction
  • operating_cost

Add in option to plot results on a sub-region basis

Is your feature request related to a problem?

When using cerf.plot_siting() to show a map of the locations of individual power plant results, it would be beneficial to have the option to show the map on an individual sub-region basis (e.g., state) rather than for the entire region (e.g., the US).

Describe the solution you'd like

Add a parameter to cerf.plot_siting() to specify which sub-region to plot, or a parameter to only plot the sub-regions included in the output dataframe.
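A hypothetical sketch of what either option might look like (neither parameter exists today):

# option 1: explicit sub-region selection
cerf.plot_siting(result_df, sub_region="new_york")

# option 2: only plot sub-regions present in the output dataframe
cerf.plot_siting(result_df, limit_to_output_regions=True)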

Have the model automatically use the random seed when one is provided

To use a random seed, the user currently has to both provide a seed value and set the randomize parameter in the config file to False. This should be changed so that randomize is automatically set to False whenever a random seed is provided.

# use random seed to create reproducible outcomes
if randomize is False:
    np.random.seed(seed_value)
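A minimal sketch of the proposed behavior (a fragment mirroring the block above):

import numpy as np

# providing a seed value implies a deterministic, non-randomized run
if seed_value is not None:
    randomize = False
    np.random.seed(seed_value)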

Add point output

Add latitude, longitude output that contains site information. This is generated from the output sited raster.
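A minimal sketch of deriving points from a sited raster (the function name is hypothetical; coordinates come back in the raster CRS, so reprojection to EPSG:4326 would be needed for true latitude and longitude):

import numpy as np
import rasterio
from rasterio.transform import xy

def sited_raster_to_points(sited_raster_file: str):
    """Return (x, y) coordinate pairs for all sited (non-zero) cells."""
    with rasterio.open(sited_raster_file) as src:
        arr = src.read(1)
        rows, cols = np.where(arr > 0)
        xs, ys = xy(src.transform, rows.tolist(), cols.tolist())
    return list(zip(xs, ys))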

JOSS Review: Quickstart

Part of openjournals/joss-reviews#3601

I cloned the repository to my laptop (OSX 10.15.7) and created a new Python 3.9 environment using miniconda. I ran python setup.py develop to install the package in editable mode.

I manually installed the jupyter and notebook packages using miniconda.

I ran all cells in the quickstart.ipynb notebook and got an error running cell 4:

Processing year:  2010
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
/var/folders/_7/bccbfn110b501smf76mty7xh0000gn/T/ipykernel_22869/3443521374.py in <module>
     11     # do not intialize the run with previously sited data if it is the first time step
     12     if index == 0:
---> 13         result_df = cerf.run(config_file, write_output=False, config_dict=config_dict)
     14 
     15     else:

NameError: name 'config_dict' is not defined

Rename input variables for increased clarity

The following cerf input variables should be renamed:

| Current variable name | New variable name |
| --- | --- |
| capacity_factor | capacity_factor_fraction |
| carbon_capture_rate | carbon_capture_rate_fraction |
| fuel_co2_content | fuel_co2_content_tons_per_btu |
| fuel_price | fuel_price_usd_per_mmbtu |
| fuel_esc_rate | fuel_price_esc_rate_fraction |
| heat_rate | heat_rate_btu_per_kWh |
| lifetime | lifetime_yrs |
| unit_size | unit_size_mw |
| variable_om | variable_om_usd_per_mwh |
| variable_cost_esc_rate | variable_om_esc_rate_fraction |
| carbon_tax | carbon_tax_usd_per_ton |
| carbon_esc_rate | carbon_tax_esc_rate_fraction |

Note that the units follow what cerf uses in its equations after all conversions (whether internal or external) occur. If the internal conversions are removed from the model, these names should still be consistent with what cerf expects.
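A minimal sketch of a migration mapping mirroring the table above (the helper function is hypothetical):

RENAME_MAP = {
    "capacity_factor": "capacity_factor_fraction",
    "carbon_capture_rate": "carbon_capture_rate_fraction",
    "fuel_co2_content": "fuel_co2_content_tons_per_btu",
    "fuel_price": "fuel_price_usd_per_mmbtu",
    "fuel_esc_rate": "fuel_price_esc_rate_fraction",
    "heat_rate": "heat_rate_btu_per_kWh",
    "lifetime": "lifetime_yrs",
    "unit_size": "unit_size_mw",
    "variable_om": "variable_om_usd_per_mwh",
    "variable_cost_esc_rate": "variable_om_esc_rate_fraction",
    "carbon_tax": "carbon_tax_usd_per_ton",
    "carbon_esc_rate": "carbon_tax_esc_rate_fraction",
}

def rename_keys(technology_dict: dict) -> dict:
    """Translate old variable names to the new ones, leaving others as-is."""
    return {RENAME_MAP.get(k, k): v for k, v in technology_dict.items()}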

Remove GitLFS

Remove the need for Git LFS and utilize an install supplement module to retrieve the corresponding data from a DataHub or Zenodo archive.

Add an optional output file with a summary of un-sited/sited power plants

Add a yml or similar output file that contains information on the number of sited plants and number of plants that were unable to be sited in a region by technology.

A suggested yaml structure could be the following for each technology, one output file per run:

cerf_biomass_igcc_ccs_recirculating:
    sited_plants: 20
    unsited_plants: 2

If it's an optional file output, it will also need to be added as a new component in the configuration file. One suggested option is to specify new settings in the following way by adding the last two components:

settings:
  run_year: 2025
  output_directory: 
  randomize: true
  seed_value: 0
  region_raster_file: 
  region_abbrev_to_name_file: 
  region_name_to_id_file:
  output_siting_summary_file: true
  siting_summary_output_directory: 
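A minimal sketch of writing such a summary file with PyYAML (the file name and structure mirror the suggestion above):

import yaml

siting_summary = {
    "cerf_biomass_igcc_ccs_recirculating": {
        "sited_plants": 20,
        "unsited_plants": 2,
    }
}

with open("cerf_siting_summary_2025.yml", "w") as f:
    yaml.safe_dump(siting_summary, f)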

JOSS Review: Requirements

Part of openjournals/joss-reviews#3601

I've noticed in cerf that there are requirements pinned in requirements.txt and in setup.py. You may want to consider removing the ~= pins as they're restrictive - you may miss patches in your dependencies. Given that the test coverage is reasonable, this may not be a big risk.
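For illustration, the difference between the two pin styles (versions are illustrative):

# requirements.txt
numpy~=1.21.0    # compatible-release pin: allows 1.21.x only
numpy>=1.21      # lower bound only: also picks up future minor releases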

Finish NOV module

Make NOV module return a 3D array where each 2D slice is NOV for the grid space and the third dimension is technology.
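A minimal sketch of the target shape, where nov_for_tech is a hypothetical per-technology helper returning a 2D NOV array over the grid:

import numpy as np

nov_arr = np.stack([nov_for_tech(tech) for tech in technology_order])
# nov_arr.shape == (n_technologies, n_rows, n_cols)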

Inheritance of buffer and site in output

Write out the suitability mask 3D array that includes updated exclusion from sites and their buffers after all siting has finished. This will be used by subsequent years as an input.

Also store the 1D index of each site so that they can be retired throughout the timeline.
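A minimal sketch of the 1D indexing, assuming sited_arr is a 2D array of sited cells:

import numpy as np

rows, cols = np.where(sited_arr > 0)
site_indices_1d = np.ravel_multi_index((rows, cols), sited_arr.shape)

# to retire a site in a later time step, map back to 2D:
r, c = np.unravel_index(site_indices_1d[0], sited_arr.shape)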

Add additional interconnection cost component for Carbon Capture Sequestration

Power plants with carbon capture and sequestration (CCS) must interconnect to CO2 storage. This development will be a simple first pass at addressing this consideration, assuming the power plant must build a pipeline to the nearest saline formation ($/km).

The CCS interconnection cost will be added in the exact same way as the natural gas pipeline interconnection cost component already in CERF. This will require the following files:

  1. A shapefile of saline formations
  2. A yaml file with CO2 pipeline costs ($/km)

These components will have to be added in as parameters in the yaml configuration file so that users can point to the appropriate files.
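A hypothetical sketch of the new configuration entries, mirroring the existing gas pipeline entries (key names are illustrative):

infrastructure:
  saline_formation_file: /path/to/saline_formations.shp
  co2_pipeline_costs_file: /path/to/co2_pipeline_costs.yml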

JOSS REVIEW: Location of downloaded data is not findable when package is installed

Review openjournals/joss-reviews#3601

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of cerf.

  • (optional) I have confirmed this bug exists on the master branch of cerf.


Note: Please read this guide detailing how to provide the necessary information for us to reproduce your bug.

Code Sample, a copy-pastable example

import cerf

cerf.install_package_data()

Problem description

The current install_package_data() function downloads and places data within the directory where the package is installed.

While this is functional (the package can find the data fine), it's not very helpful for a user who wants to view the example data, or to modify it to experiment with the package.

If I install cerf using pip (pip install cerf), then cerf is placed in a system-defined directory within the Python environment I have defined. For example:

/Users/wusher/miniconda3/envs/cerf_rev_2/lib/python3.9/site-packages/cerf/data/

Expected Output

I think it would be more user friendly and transparent if the data was downloaded to a user-defined location, with a path passed to the function.

To run an analysis this would also require a user to provide a path, or configuration file containing the path to the data.

For example:

import cerf

DATA_PATH = 'data'

cerf.install_package_data(DATA_PATH)

sample_year = 2030

# load the sample configuration file path for the target year
config_file = cerf.config_file(sample_year, DATA_PATH)

# run the configuration for the target year and return a data frame
result_df = cerf.run(config_file, write_output=False)

result_df.head()

Update CERF output to include retired power plants and all operational power plants

Current CERF configuration: Output DataFrame only includes the retirement date of newly sited power plants based on plant lifetime.

Desired updates:

  1. Change CERF output to include all operational power plants as an optional parameter. This would include historical (non-CERF-sited, non-retired) power plants, historical (non-retired) CERF-sited power plants, and newly sited power plants. This will require initializing CERF with an input file of operational power plants that are on the ground prior to any new capacity expansion plans, tracking their expected retirement, and tracking previous CERF sitings.

  2. Produce a second output DataFrame of power plants that have been retired in the last timestep.
