iamconsortium / pyam
Analysis & visualization of energy & climate scenarios
Home Page: https://pyam-iamc.readthedocs.io/
License: Apache License 2.0
I messed up our last release such that it doesn't show the license badge. On the next one, make sure "license": "Apache 2.0" is in setup.py, and add the following lines to doc/source/index.rst:
.. image:: https://img.shields.io/pypi/l/pyam-iamc.svg
:target: https://pypi.python.org/pypi/pyam-iamc
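For reference, a minimal sketch of the relevant setuptools call (all arguments other than license are illustrative, not the actual file contents):

from setuptools import setup

setup(
    name='pyam-iamc',
    license='Apache 2.0',  # required for the PyPI license badge to render
    # ... other arguments as in the actual setup.py
)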
In the context of openjournals/joss-reviews#1095, I realized that the last cell in paper.md is not running. Namely:
(df
 .filter(region='World')
 .scatter(x='Primary Energy|Coal', y='Emissions|CO2',
          color='Temperature', alpha=0.5, legend=True)
)
It produces the following error:
KeyError Traceback (most recent call last)
~/miniconda3/envs/pyam/lib/python3.6/site-packages/matplotlib/colors.py in to_rgba(c, alpha)
173 try:
--> 174 rgba = _colors_full_map.cache[c, alpha]
175 except (KeyError, TypeError): # Not in cache, or unhashable.
KeyError: ('T', None)
(The 'T' is presumably the first character of 'Temperature', i.e. the column name seems to be handed to matplotlib as a color specification.)
Here is some information about the system and the conda environment:
active environment : pyam
active env location : /Users/psommer/miniconda3/envs/pyam
shell level : 2
user config file : /Users/psommer/.condarc
populated config files : /Users/psommer/.condarc
conda version : 4.5.11
conda-build version : 3.11.0
python version : 3.6.4.final.0
base environment : /Users/psommer/miniconda3 (writable)
channel URLs : https://conda.anaconda.org/chilipp/label/dev/osx-64
https://conda.anaconda.org/chilipp/label/dev/noarch
https://conda.anaconda.org/chilipp/label/master/osx-64
https://conda.anaconda.org/chilipp/label/master/noarch
https://conda.anaconda.org/chilipp/osx-64
https://conda.anaconda.org/chilipp/noarch
https://conda.anaconda.org/conda-forge/osx-64
https://conda.anaconda.org/conda-forge/noarch
https://repo.anaconda.com/pkgs/main/osx-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/free/osx-64
https://repo.anaconda.com/pkgs/free/noarch
https://repo.anaconda.com/pkgs/r/osx-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/pro/osx-64
https://repo.anaconda.com/pkgs/pro/noarch
package cache : /Users/psommer/miniconda3/pkgs
/Users/psommer/.conda/pkgs
envs directories : /Users/psommer/miniconda3/envs
/Users/psommer/.conda/envs
platform : osx-64
user-agent : conda/4.5.11 requests/2.18.4 CPython/3.6.4 Darwin/17.7.0 OSX/10.13.6
UID:GID : 502:20
netrc file : None
offline mode : False
# packages in environment at /Users/psommer/miniconda3/envs/pyam:
#
# Name Version Build Channel
alabaster 0.7.12 <pip>
appnope 0.1.0 <pip>
argparse 1.4.0 <pip>
asn1crypto 0.24.0 py36_1003 conda-forge
atomicwrites 1.2.1 <pip>
attrs 18.2.0 py_0 conda-forge
Babel 2.6.0 <pip>
backcall 0.1.0 <pip>
backports 1.0 py_2 conda-forge
backports.functools_lru_cache 1.5 py_1 conda-forge
blas 1.0 mkl
bleach 3.0.2 <pip>
boost-cpp 1.68.0 h3a22d5f_0 conda-forge
bzip2 1.0.6 1 conda-forge
ca-certificates 2018.11.29 ha4d7672_0 conda-forge
cairo 1.14.12 he6fea26_5 conda-forge
cartopy 0.17.0 py36h81b52dc_0 conda-forge
certifi 2018.11.29 py36_1000 conda-forge
cffi 1.11.5 py36h5e8e0c9_1 conda-forge
chardet 3.0.4 py36_1003 conda-forge
click 7.0 py_0 conda-forge
click-plugins 1.0.4 py_0 conda-forge
cligj 0.5.0 py_0 conda-forge
cloud-sptheme 1.9.4 <pip>
coverage 4.5.2 <pip>
cryptography 2.3.1 py36hdffb7b8_0 conda-forge
cryptography-vectors 2.3.1 py36_1000 conda-forge
curl 7.62.0 h74213dd_0 conda-forge
cycler 0.10.0 py_1 conda-forge
decorator 4.3.0 <pip>
descartes 1.1.0 py_2 conda-forge
docutils 0.14 <pip>
entrypoints 0.2.3 <pip>
expat 2.2.5 hfc679d8_2 conda-forge
fiona 1.8.3 py36hfc77a4a_0 conda-forge
fontconfig 2.13.1 hce039c3_0 conda-forge
freetype 2.9.1 h6debe1e_4 conda-forge
freexl 1.0.5 h470a237_2 conda-forge
gdal 2.3.2 py36hb00a9d7_0 conda-forge
geopandas 0.4.0 py_1 conda-forge
geos 3.6.2 hfc679d8_4 conda-forge
geotiff 1.4.2 h700e5ad_5 conda-forge
gettext 0.19.8.1 h1f1d5ed_1 conda-forge
giflib 5.1.4 h470a237_1 conda-forge
glib 2.55.0 h464dc38_2 conda-forge
hdf4 4.2.13 h951d187_2 conda-forge
hdf5 1.10.3 hc401514_2 conda-forge
icu 58.2 hfc679d8_0 conda-forge
idna 2.7 py36_1002 conda-forge
imagesize 1.1.0 <pip>
intel-openmp 2019.1 144
ipykernel 5.1.0 <pip>
ipython 7.2.0 <pip>
ipython-genutils 0.2.0 <pip>
ipywidgets 7.4.2 <pip>
jedi 0.13.1 <pip>
Jinja2 2.10 <pip>
jpeg 9c h470a237_1 conda-forge
json-c 0.12.1 h470a237_1 conda-forge
jsonschema 2.6.0 <pip>
jupyter 1.0.0 <pip>
jupyter-client 5.2.3 <pip>
jupyter-console 6.0.0 <pip>
jupyter-core 4.4.0 <pip>
kealib 1.4.10 hb88cf67_0 conda-forge
kiwisolver 1.0.1 py36h2d50403_2 conda-forge
krb5 1.16.2 hbb41f41_0 conda-forge
latexcodec 1.0.5 <pip>
libcurl 7.62.0 hbdb9355_0 conda-forge
libcxx 7.0.0 h2d50403_2 conda-forge
libdap4 3.19.1 h18059cb_1 conda-forge
libedit 3.1.20170329 haf1bffa_1 conda-forge
libffi 3.2.1 hfc679d8_5 conda-forge
libgdal 2.3.2 h6a28ee2_0 conda-forge
libgfortran 3.0.1 h93005f0_2
libiconv 1.15 h470a237_3 conda-forge
libkml 1.3.0 he469717_9 conda-forge
libnetcdf 4.6.1 h350cafa_11 conda-forge
libpng 1.6.36 ha92aebf_0 conda-forge
libpq 10.5 hf16a0db_1 conda-forge
libspatialindex 1.8.5 hfc679d8_3 conda-forge
libspatialite 4.3.0a h3b29d86_23 conda-forge
libssh2 1.8.0 h5b517e9_3 conda-forge
libtiff 4.0.10 he6b73bb_0 conda-forge
libxml2 2.9.8 h422b904_5 conda-forge
libxslt 1.1.32 h88dbc4e_2 conda-forge
llvm-meta 7.0.0 0 conda-forge
lxml 4.2.5 py36hc9114bc_0 conda-forge
MarkupSafe 1.1.0 <pip>
matplotlib 3.0.2 py36_1 conda-forge
matplotlib-base 3.0.2 py36hb2d221d_1 conda-forge
mistune 0.8.4 <pip>
mkl 2018.0.3 1
mkl_fft 1.0.10 py36_0 conda-forge
mkl_random 1.0.2 py36_0 conda-forge
more-itertools 4.3.0 <pip>
munch 2.3.2 py_0 conda-forge
nbconvert 5.3.1 <pip>
nbformat 4.4.0 <pip>
nbsphinx 0.3.5 <pip>
ncurses 6.1 hfc679d8_1 conda-forge
nose 1.3.7 <pip>
notebook 5.7.2 <pip>
numpy 1.15.4 py36h6a91979_0
numpy-base 1.15.4 py36h8a80b8c_0
numpydoc 0.8.0 <pip>
olefile 0.46 py_0 conda-forge
openjpeg 2.3.0 h316dc23_3 conda-forge
openssl 1.0.2p h470a237_1 conda-forge
oset 0.1.3 <pip>
owslib 0.17.0 py_0 conda-forge
packaging 18.0 <pip>
pandas 0.23.4 py36hf8a1672_0 conda-forge
pandocfilters 1.4.2 <pip>
parso 0.3.1 <pip>
patsy 0.5.1 py_0 conda-forge
pcre 8.41 hfc679d8_3 conda-forge
pexpect 4.6.0 <pip>
pickleshare 0.7.5 <pip>
pillow 5.3.0 py36hc736899_0 conda-forge
pip 18.1 py36_1000 conda-forge
pixman 0.34.0 h470a237_3 conda-forge
pluggy 0.8.0 <pip>
poppler 0.67.0 h4d7e492_3 conda-forge
poppler-data 0.4.9 0 conda-forge
postgresql 10.5 ha408888_1 conda-forge
proj4 4.9.3 h470a237_8 conda-forge
prometheus-client 0.4.2 <pip>
prompt-toolkit 2.0.7 <pip>
psycopg2 2.7.6.1 py36hdffb7b8_0 conda-forge
ptyprocess 0.6.0 <pip>
py 1.7.0 <pip>
pyam 0.1.2 0 conda-forge
pyam-iamc 0.1.2+8.gd04891c.dirty <pip>
pybtex 0.22.0 <pip>
pybtex-docutils 0.2.1 <pip>
pycparser 2.19 py_0 conda-forge
pyepsg 0.4.0 py_0 conda-forge
Pygments 2.3.0 <pip>
pykdtree 1.3.1 py36h7eb728f_2 conda-forge
pyopenssl 18.0.0 py36_1000 conda-forge
pyparsing 2.3.0 py_0 conda-forge
pyproj 1.9.5.1 py36h508ed2a_6 conda-forge
pysal 1.14.4.post2 py36_1001 conda-forge
pyshp 2.0.0 py_0 conda-forge
pysocks 1.6.8 py36_1002 conda-forge
pytest 4.0.1 <pip>
pytest-cov 2.6.0 <pip>
pytest-mpl 0.10 <pip>
python 3.6.7 h5001a0f_1 conda-forge
python-dateutil 2.7.5 py_0 conda-forge
pytz 2018.7 py_0 conda-forge
pyyaml 3.13 py36h470a237_1 conda-forge
pyzmq 17.1.2 <pip>
qtconsole 4.4.3 <pip>
readline 7.0 haf1bffa_1 conda-forge
requests 2.20.1 py36_1000 conda-forge
rtree 0.8.3 py36_1000 conda-forge
scipy 1.1.0 py36h28f7352_1
seaborn 0.9.0 py_0 conda-forge
Send2Trash 1.5.0 <pip>
setuptools 40.6.2 py36_0 conda-forge
shapely 1.6.4 py36h164cb2d_1 conda-forge
six 1.11.0 py36_1001 conda-forge
snowballstemmer 1.2.1 <pip>
Sphinx 1.8.2 <pip>
sphinx-gallery 0.2.0 <pip>
sphinxcontrib-bibtex 0.4.1 <pip>
sphinxcontrib-fulltoc 1.2.0 <pip>
sphinxcontrib-programoutput 0.11 <pip>
sphinxcontrib-websupport 1.1.0 <pip>
sqlalchemy 1.2.14 py36h470a237_0 conda-forge
sqlite 3.26.0 hb1c47c0_0 conda-forge
statsmodels 0.9.0 py36h7eb728f_0 conda-forge
terminado 0.8.1 <pip>
testpath 0.4.2 <pip>
tk 8.6.9 ha92aebf_0 conda-forge
tornado 5.1.1 py36h470a237_0 conda-forge
traitlets 4.3.2 <pip>
urllib3 1.23 py36_1001 conda-forge
wcwidth 0.1.7 <pip>
webencodings 0.5.1 <pip>
wheel 0.32.3 py36_0 conda-forge
widgetsnbextension 3.4.2 <pip>
xerces-c 3.2.0 h5d6a6da_2 conda-forge
xlrd 1.1.0 py_2 conda-forge
xlsxwriter 1.1.2 py_0 conda-forge
xz 5.2.4 h470a237_1 conda-forge
yaml 0.1.7 h470a237_1 conda-forge
zlib 1.2.11 h470a237_3 conda-forge
Include options for r5, iso, etc.
Remove it from the region_plot function?
notably
16:22 pythia:~/work/iiasa/pyam [master|…64]$ make publish-on-testpypi
rm -rf build dist
/bin/sh: 3: Syntax error: ";" unexpected
Makefile:11: recipe for target 'publish-on-testpypi' failed
make: *** [publish-on-testpypi] Error 2
I haven't looked in detail yet, just saw this, @znicholls. Thoughts?
Propose a simple renaming of the function to be more explicit,
due to multiple model looping,
and figure out how best to do the model checking.
Note that we will likely need to add SSD for all models by copying SDN.
There are currently two files, MANIFEST.IN and MANIFEST.in, in the repository. This causes issues because case-insensitive file systems treat them as the same file, which confuses git whenever I try to make a commit.
Seems that this was introduced by PR #145.
If one does:
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ pip install --upgrade pip
(venv) $ pip install pyam-iamc
then not all of the required dependencies are installed, i.e.,
>>> import pyam
...
ModuleNotFoundError: No module named 'numpy'
This is solved by #120
Using a metadata column for filtering by a string fails if the column is not entirely defined (i.e., has nan entries).
By default, line_plot() should not show the legend if there are more than 13 labels. The warning message that no legend will be shown is printed, but the legend is shown anyway (producing plots so small that one can't see anything).
Add a function to check whether the 'World' timeseries data is equal to the timeseries data of all regions.
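A minimal sketch of what such a check could look like, working on the long-format data attribute (the function name and tolerance handling are assumptions):

import numpy as np

def check_world_aggregate(df, rtol=1e-5):
    # compare the 'World' timeseries to the sum over all other regions
    idx = ['model', 'scenario', 'variable', 'unit', 'year']
    data = df.data
    world = data[data.region == 'World'].set_index(idx).value
    regions = data[data.region != 'World'].groupby(idx).value.sum()
    world, regions = world.align(regions, join='inner')
    return np.isclose(world, regions, rtol=rtol).all()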
PR #103 fixes a bug of the cumulative() and fill_series() functions when timeseries have columns that are defined as floats instead of integers.
It would be great to also check the IamDataFrame.data.year column when initializing a new instance, renaming, or appending data.
See entries 10 and 11 in the IAMC presentation, reproduced below:
df.variables(filters={'variable': 'Emissions|*', 'level': 2})
['Emissions|CO2',
'Emissions|CO2|Fossil Fuels and Industry',
'Emissions|CO2|Fossil Fuels and Industry|Energy Supply']
and
df.variables(filters={'level': 1})
['Emissions|CO2', 'Price|Carbon', 'Primary Energy', 'Primary Energy|Coal']
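The level apparently counts '|' separators: absolute when used alone, relative to the variable pattern when combined with one. A rough sketch of that logic (assumed, not the actual implementation):

def find_depth(variable, pattern=''):
    # count '|' separators beyond those already in the (wildcard-stripped) pattern
    prefix = pattern.rstrip('*')
    return variable.count('|') - prefix.count('|')

variables = ['Emissions|CO2', 'Emissions|CO2|Fossil Fuels and Industry',
             'Price|Carbon', 'Primary Energy', 'Primary Energy|Coal']
[v for v in variables if find_depth(v) <= 1]  # filters={'level': 1}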
When running the unit tests with an up-to-date pandas package, there are a few warnings about future deprecations and about setting values on copies of dataframes. The unit tests should be reworked to get a clean "all-pass" without warnings.
Currently, using strings for filtering in the filter() function implements a pseudo-syntax, where * is used as a wildcard and other regexp arguments are escaped to provide a "close-to-string" user experience. However, for expert users, we should add a feature to explicitly allow regexp (i.e., not apply the pseudo-syntax in pattern_match()).
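A minimal sketch of how the pseudo-syntax and an explicit regexp escape hatch could coexist (the regexp keyword is the proposed feature, not the current API):

import re

def pattern_match(values, pattern, regexp=False):
    # by default, escape everything except '*', which becomes a wildcard
    if not regexp:
        pattern = re.escape(pattern).replace(r'\*', '.*')
    prog = re.compile('^' + pattern + '$')
    return [bool(prog.match(v)) for v in values]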
The current implementation of the append() function allows merging IamDataFrames, but only if the model-scenario index does not overlap.
It would be useful to extend this function such that it can be used to append data (rather than a full IamDataFrame merge) for scenarios that are already present in the IamDataFrame. A useful kwarg could be verify_integrity (default True) to allow merging data to existing scenarios.
Integrity across variables/regions/units should always be enforced.
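A sketch of the proposed usage (verify_integrity is the suggested kwarg, not an existing argument):

# current behaviour: raise if any model-scenario pair already exists
df.append(other)

# proposed: allow adding new variables/regions to existing scenarios
df.append(other, verify_integrity=False)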
PyPI already contains a package named pyam: "Python package for solving assortative matching models with two-sided heterogeneity." This clashes with pip install pyam or similar. Should we rename our distribution to iamc-pyam or something like that?
In line with review openjournals/joss-reviews#1095, I went through the first-steps notebook and realized that the to_excel method does not export the data when calling df.to_excel('tutorial_export.xlsx')
at the end.
I installed pyam via conda create -n pyam -c conda-forge pyam and the documentation requirements via pip install -r doc/requirements.txt.
Can you confirm this behaviour? Below, I attach some system information and the conda environment packages.
active environment : pyam
active env location : /Users/psommer/miniconda3/envs/pyam
shell level : 2
user config file : /Users/psommer/.condarc
populated config files : /Users/psommer/.condarc
conda version : 4.5.11
conda-build version : 3.11.0
python version : 3.6.4.final.0
base environment : /Users/psommer/miniconda3 (writable)
channel URLs : https://conda.anaconda.org/conda-forge/osx-64
https://conda.anaconda.org/conda-forge/noarch
https://repo.anaconda.com/pkgs/main/osx-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/free/osx-64
https://repo.anaconda.com/pkgs/free/noarch
https://repo.anaconda.com/pkgs/r/osx-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/pro/osx-64
https://repo.anaconda.com/pkgs/pro/noarch
package cache : /Users/psommer/miniconda3/pkgs
/Users/psommer/.conda/pkgs
envs directories : /Users/psommer/miniconda3/envs
/Users/psommer/.conda/envs
platform : osx-64
user-agent : conda/4.5.11 requests/2.18.4 CPython/3.6.4 Darwin/17.7.0 OSX/10.13.6
UID:GID : 502:20
netrc file : None
offline mode : False
# packages in environment at /Users/psommer/miniconda3/envs/pyam:
#
# Name Version Build Channel
alabaster 0.7.12 <pip>
appnope 0.1.0 <pip>
asn1crypto 0.24.0 py36_1003 conda-forge
attrs 18.2.0 py_0 conda-forge
Babel 2.6.0 <pip>
backcall 0.1.0 <pip>
blas 1.0 mkl
bleach 3.0.2 <pip>
boost-cpp 1.68.0 h3a22d5f_0 conda-forge
bzip2 1.0.6 1 conda-forge
ca-certificates 2018.11.29 ha4d7672_0 conda-forge
cairo 1.14.12 h276e583_5 conda-forge
cartopy 0.17.0 py36h81b52dc_0 conda-forge
certifi 2018.11.29 py36_1000 conda-forge
cffi 1.11.5 py36h5e8e0c9_1 conda-forge
chardet 3.0.4 py36_1003 conda-forge
click 7.0 py_0 conda-forge
click-plugins 1.0.4 py_0 conda-forge
cligj 0.5.0 py_0 conda-forge
cloud-sptheme 1.9.4 <pip>
cryptography 2.3.1 py36hdffb7b8_0 conda-forge
cryptography-vectors 2.3.1 py36_1000 conda-forge
curl 7.62.0 h74213dd_0 conda-forge
cycler 0.10.0 py_1 conda-forge
decorator 4.3.0 <pip>
descartes 1.1.0 py_2 conda-forge
docutils 0.14 <pip>
entrypoints 0.2.3 <pip>
expat 2.2.5 hfc679d8_2 conda-forge
fiona 1.8.3 py36hfc77a4a_0 conda-forge
fontconfig 2.13.1 hce039c3_0 conda-forge
freetype 2.9.1 h6debe1e_4 conda-forge
freexl 1.0.5 h470a237_2 conda-forge
gdal 2.3.2 py36hb00a9d7_0 conda-forge
geopandas 0.4.0 py_1 conda-forge
geos 3.6.2 hfc679d8_4 conda-forge
geotiff 1.4.2 h700e5ad_5 conda-forge
gettext 0.19.8.1 h1f1d5ed_1 conda-forge
giflib 5.1.4 h470a237_1 conda-forge
glib 2.56.2 h464dc38_1 conda-forge
hdf4 4.2.13 h951d187_2 conda-forge
hdf5 1.10.3 hc401514_2 conda-forge
icu 58.2 hfc679d8_0 conda-forge
idna 2.7 py36_1002 conda-forge
imagesize 1.1.0 <pip>
intel-openmp 2019.1 144
ipykernel 5.1.0 <pip>
ipython 7.2.0 <pip>
ipython-genutils 0.2.0 <pip>
ipywidgets 7.4.2 <pip>
jedi 0.13.1 <pip>
Jinja2 2.10 <pip>
jpeg 9c h470a237_1 conda-forge
json-c 0.12.1 h470a237_1 conda-forge
jsonschema 2.6.0 <pip>
jupyter 1.0.0 <pip>
jupyter-client 5.2.3 <pip>
jupyter-console 6.0.0 <pip>
jupyter-core 4.4.0 <pip>
kealib 1.4.10 hb88cf67_0 conda-forge
kiwisolver 1.0.1 py36h2d50403_2 conda-forge
krb5 1.16.2 hbb41f41_0 conda-forge
latexcodec 1.0.5 <pip>
libcurl 7.62.0 hbdb9355_0 conda-forge
libcxx 7.0.0 h2d50403_2 conda-forge
libdap4 3.19.1 h18059cb_1 conda-forge
libedit 3.1.20170329 haf1bffa_1 conda-forge
libffi 3.2.1 hfc679d8_5 conda-forge
libgdal 2.3.2 h6a28ee2_0 conda-forge
libgfortran 3.0.1 h93005f0_2
libiconv 1.15 h470a237_3 conda-forge
libkml 1.3.0 he469717_9 conda-forge
libnetcdf 4.6.1 h350cafa_11 conda-forge
libpng 1.6.36 ha92aebf_0 conda-forge
libpq 10.5 hf16a0db_1 conda-forge
libspatialindex 1.8.5 hfc679d8_3 conda-forge
libspatialite 4.3.0a h3b29d86_23 conda-forge
libssh2 1.8.0 h5b517e9_3 conda-forge
libtiff 4.0.10 he6b73bb_0 conda-forge
libxml2 2.9.8 h422b904_5 conda-forge
libxslt 1.1.32 h88dbc4e_2 conda-forge
llvm-meta 7.0.0 0 conda-forge
lxml 4.2.5 py36hc9114bc_0 conda-forge
MarkupSafe 1.1.0 <pip>
matplotlib 3.0.2 py36_1 conda-forge
matplotlib-base 3.0.2 py36hb2d221d_1 conda-forge
mistune 0.8.4 <pip>
mkl 2018.0.3 1
mkl_fft 1.0.10 py36_0 conda-forge
mkl_random 1.0.2 py36_0 conda-forge
munch 2.3.2 py_0 conda-forge
nbconvert 5.3.1 <pip>
nbformat 4.4.0 <pip>
nbsphinx 0.3.5 <pip>
ncurses 6.1 hfc679d8_1 conda-forge
notebook 5.7.2 <pip>
numpy 1.15.4 py36h6a91979_0
numpy-base 1.15.4 py36h8a80b8c_0
numpydoc 0.8.0 <pip>
olefile 0.46 py_0 conda-forge
openjpeg 2.3.0 h316dc23_3 conda-forge
openssl 1.0.2p h470a237_1 conda-forge
oset 0.1.3 <pip>
owslib 0.17.0 py_0 conda-forge
packaging 18.0 <pip>
pandas 0.23.4 py36hf8a1672_0 conda-forge
pandocfilters 1.4.2 <pip>
parso 0.3.1 <pip>
patsy 0.5.1 py_0 conda-forge
pcre 8.41 hfc679d8_3 conda-forge
pexpect 4.6.0 <pip>
pickleshare 0.7.5 <pip>
pillow 5.3.0 py36hc736899_0 conda-forge
pip 18.1 py36_1000 conda-forge
pixman 0.34.0 h470a237_3 conda-forge
poppler 0.67.0 hdf8a1b3_2 conda-forge
poppler-data 0.4.9 0 conda-forge
postgresql 10.5 ha408888_1 conda-forge
proj4 4.9.3 h470a237_8 conda-forge
prometheus-client 0.4.2 <pip>
prompt-toolkit 2.0.7 <pip>
psycopg2 2.7.6.1 py36hdffb7b8_0 conda-forge
ptyprocess 0.6.0 <pip>
pyam 0.1.2 0 conda-forge
pybtex 0.22.0 <pip>
pybtex-docutils 0.2.1 <pip>
pycparser 2.19 py_0 conda-forge
pyepsg 0.4.0 py_0 conda-forge
Pygments 2.3.0 <pip>
pykdtree 1.3.1 py36h7eb728f_2 conda-forge
pyopenssl 18.0.0 py36_1000 conda-forge
pyparsing 2.3.0 py_0 conda-forge
pyproj 1.9.5.1 py36h508ed2a_6 conda-forge
pysal 1.14.4.post2 py36_1001 conda-forge
pyshp 2.0.0 py_0 conda-forge
pysocks 1.6.8 py36_1002 conda-forge
python 3.6.7 h5001a0f_1 conda-forge
python-dateutil 2.7.5 py_0 conda-forge
pytz 2018.7 py_0 conda-forge
pyyaml 3.13 py36h470a237_1 conda-forge
pyzmq 17.1.2 <pip>
qtconsole 4.4.3 <pip>
readline 7.0 haf1bffa_1 conda-forge
requests 2.20.1 py36_1000 conda-forge
rtree 0.8.3 py36_1000 conda-forge
scipy 1.1.0 py36h28f7352_1
seaborn 0.9.0 py_0 conda-forge
Send2Trash 1.5.0 <pip>
setuptools 40.6.2 py36_0 conda-forge
shapely 1.6.4 py36h164cb2d_1 conda-forge
six 1.11.0 py36_1001 conda-forge
snowballstemmer 1.2.1 <pip>
Sphinx 1.8.2 <pip>
sphinx-gallery 0.2.0 <pip>
sphinxcontrib-bibtex 0.4.1 <pip>
sphinxcontrib-fulltoc 1.2.0 <pip>
sphinxcontrib-programoutput 0.11 <pip>
sphinxcontrib-websupport 1.1.0 <pip>
sqlalchemy 1.2.14 py36h470a237_0 conda-forge
sqlite 3.26.0 hb1c47c0_0 conda-forge
statsmodels 0.9.0 py36h7eb728f_0 conda-forge
terminado 0.8.1 <pip>
testpath 0.4.2 <pip>
tk 8.6.9 ha92aebf_0 conda-forge
tornado 5.1.1 py36h470a237_0 conda-forge
traitlets 4.3.2 <pip>
urllib3 1.23 py36_1001 conda-forge
wcwidth 0.1.7 <pip>
webencodings 0.5.1 <pip>
wheel 0.32.3 py36_0 conda-forge
widgetsnbextension 3.4.2 <pip>
xerces-c 3.2.0 h5d6a6da_2 conda-forge
xlrd 1.1.0 py_2 conda-forge
xlsxwriter 1.1.2 py_0 conda-forge
xz 5.2.4 h470a237_1 conda-forge
yaml 0.1.7 h470a237_1 conda-forge
zlib 1.2.11 h470a237_3 conda-forge
Currently, the link to the tutorial in the README.md points to tutorial/pyam_first_steps.ipynb, which does not exist.
It should either point to doc/source/tutorials/pyam_first_steps.ipynb or to the rendered docs page https://software.ene.iiasa.ac.at/pyam/tutorials/pyam_first_steps.html
Currently, loading a metadata file adds models and scenarios even if no data for them exists in the IamDataFrame. These entries should be filtered out before combining (with appropriate logger messages).
I think this is the same test that failed because of an old pandas version. We should probably do a check on import and guarantee a minimum version.
18:45 blackbear-air.local:~/work/iiasa/pyam-analysis [master|✔]$ pytest
=========================================================================================== test session starts ============================================================================================
platform darwin -- Python 2.7.10, pytest-3.2.0, py-1.4.34, pluggy-0.4.0
rootdir: /Users/gidden/work/iiasa/pyam-analysis, inifile:
collected 15 items
tests/test_core.py ...........F...
================================================================================================= FAILURES =================================================================================================
____________________________________________________________________________________________ test_load_metadata ____________________________________________________________________________________________
test_ia = <pyam_analysis.core.IamDataFrame object at 0x10bf4d910>
def test_load_metadata(test_ia):
> test_ia.load_metadata('tests/testing_metadata.xlsx')
tests/test_core.py:82:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
pyam_analysis/core.py:203: in load_metadata
self.cat_color[row['category']] = row['color']
../../../Library/Python/2.7/lib/python/site-packages/pandas/core/series.py:601: in __getitem__
result = self.index.get_value(self, key)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = Index([u'model', u'scenario', u'category'], dtype='object'), series = model test_model
scenario test_scenario
category imported
Name: 0, dtype: object, key = 'color'
def get_value(self, series, key):
"""
Fast lookup of value from 1-dimensional ndarray. Only use this if you
know what you're doing
"""
# if we have something that is Index-like, then
# use this, e.g. DatetimeIndex
s = getattr(series, '_values', None)
if isinstance(s, Index) and is_scalar(key):
try:
return s[key]
except (IndexError, ValueError):
# invalid type as an indexer
pass
s = _values_from_object(series)
k = _values_from_object(key)
k = self._convert_scalar_indexer(k, kind='getitem')
try:
return self._engine.get_value(s, k,
tz=getattr(series.dtype, 'tz', None))
except KeyError as e1:
if len(self) > 0 and self.inferred_type in ['integer', 'boolean']:
raise
try:
return libts.get_value_box(s, key)
except IndexError:
raise
except TypeError:
# generator/iterator-like
if is_iterator(key):
raise InvalidIndexError(key)
else:
> raise e1
E KeyError: 'color'
../../../Library/Python/2.7/lib/python/site-packages/pandas/core/indexes/base.py:2491: KeyError
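A minimal sketch of the import-time guard suggested above (the minimum version is a placeholder):

import pandas as pd
from distutils.version import LooseVersion

if LooseVersion(pd.__version__) < LooseVersion('0.21.0'):  # placeholder minimum
    raise ImportError(
        'pyam requires pandas >= 0.21.0, found {}'.format(pd.__version__))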
e.g., energy vs. gdp
At present, it will fill in with np.nan. I would argue that's not very helpful, and we should require that a column name be given. @danielhuppmann thoughts?
we want datasets for tutorials/examples and also region mappings, etc. there are a number of patterns out there to choose from, but I'm not sure if anything exists out of the box that would do md5 check-summing, etc.
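A minimal sketch of the kind of checksummed download helper this would need (URL, file name, and hash handling are placeholders):

import hashlib
import urllib.request
from pathlib import Path

def fetch(url, fname, md5, cache_dir='~/.cache/pyam'):
    # download `url` into the cache (if not present) and verify its md5 sum
    path = Path(cache_dir).expanduser() / fname
    if not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        urllib.request.urlretrieve(url, str(path))
    digest = hashlib.md5(path.read_bytes()).hexdigest()
    if digest != md5:
        raise IOError('md5 mismatch for {}: {}'.format(fname, digest))
    return path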
As discussed with @gidden and @znicholls today, we want to extend an IamDataFrame to be able to use sub-annual temporal resolution.
Suggestion:
- cast the year columns from integers to pd.datetime
- rename the year column to time
Thinking about this a bit further, there are two issues that we will run into; notably, not all sub-annual data uses pd.datetime. Instead, "representative time periods" are used, like summer day and winter night. We should take a step back and see if we can accommodate this behaviour. Any thoughts?
The function categorize() applies an OR logic across items when the argument criteria is a dictionary with more than one item. This should be AND, so that only scenarios satisfying all criteria (rather than any) are assigned to the metadata category.
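A sketch of the intended AND behaviour, accumulating per-criterion boolean masks (illustrative, not the current implementation):

import pandas as pd

def rows_matching_all(df, criteria):
    # keep only rows satisfying *all* criteria (AND, not OR)
    mask = pd.Series(True, index=df.index)
    for col, value in criteria.items():
        mask &= df[col] == value
    return mask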
This works:
df = pyam.read_iiasa_iamc15(
model='*',
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World'
)
but
df = pyam.read_iiasa_iamc15(
variable=['Emissions|CO2', 'Primary Energy|Coal'],
region='World'
)
fails with
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-13-513027278b97> in <module>()
1 pyam.read_iiasa_iamc15(
2 variable=['Emissions|CO2', 'Primary Energy|Coal'],
----> 3 region='World'
4 )
~/.local/lib/python3.5/site-packages/pyam/iiasa.py in read_iiasa_iamc15(**kwargs)
194 See Connection.query() for more documentation
195 """
--> 196 return read_iiasa('iamc15', **kwargs)
~/.local/lib/python3.5/site-packages/pyam/iiasa.py in read_iiasa(name, **kwargs)
186 conn = Connection(name)
187 df = conn.query(**kwargs)
--> 188 return IamDataFrame(df[LONG_IDX + ['value']])
189
190
~/.local/lib/python3.5/site-packages/pandas/core/frame.py in __getitem__(self, key)
2680 if isinstance(key, (Series, np.ndarray, Index, list)):
2681 # either boolean or fancy integer index
-> 2682 return self._getitem_array(key)
2683 elif isinstance(key, DataFrame):
2684 return self._getitem_frame(key)
~/.local/lib/python3.5/site-packages/pandas/core/frame.py in _getitem_array(self, key)
2724 return self._take(indexer, axis=0)
2725 else:
-> 2726 indexer = self.loc._convert_to_indexer(key, axis=1)
2727 return self._take(indexer, axis=1)
2728
~/.local/lib/python3.5/site-packages/pandas/core/indexing.py in _convert_to_indexer(self, obj, axis, is_setter)
1325 if mask.any():
1326 raise KeyError('{mask} not in index'
-> 1327 .format(mask=objarr[mask]))
1328
1329 return com._values_from_object(indexer)
KeyError: "['model' 'scenario' 'region' 'variable' 'unit' 'year' 'value'] not in index"
perhaps somehow nothing is being pulled? need to investigate.
At your leisure, but figured it would be good to get some other people used to doing it!
Currently, the IamDataFrame.__init__() can take a pd.DataFrame with columns ['model', 'scenario', ...], but it fails if these identifiers are given as an index.
This would be a nice-to-have feature so that the pd.DataFrame returned by the timeseries() function can be turned back into an IamDataFrame. This currently requires a reset_index().
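The round-trip, for reference:

import pyam

# works today: reset the index before re-initializing
pyam.IamDataFrame(df.timeseries().reset_index())

# desired: accept the identifiers directly from the index
pyam.IamDataFrame(df.timeseries())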
For example, if one does
df['scenario'] = df['scenario'].apply(lambda x: x[:-len('-V25')])
this change should also be applied to values in the metadata index.
should be something like
def __setitem__(self, key, value):
    if key in self.meta.index.names:
        # propagate the renaming to the meta index as well
        mapping = dict(zip(self.data[key], value))
        self.meta = self.meta.rename(index=mapping, level=key)
    # then do the actual data set (the original recursive call would never terminate)
    self.data[key] = value
Suggested by @marek-iiasa:
In functions like validate(), to avoid ambiguity, consider four keywords for lower/upper bounds, e.g. lt [less-than], le [less-or-equal], ge, gt (alternatively, for those not familiar with latex: <, <=, >=, >).
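A sketch of what such criteria could look like (these keywords are the proposal, not the current API):

# hypothetical: flag scenarios where 2050 coal use is not strictly below 100 EJ/yr
df.validate(criteria={'Primary Energy|Coal': {'lt': 100, 'year': 2050}})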
When the function line_plot() calls as_pandas() to retrieve the joined data+meta dataframe and then plots two metadata columns, it paints each marker once per timestep (because the rows in the joined dataframe are duplicated if you ignore year and value).
It might make more sense to pass the required columns to as_pandas() and then let it decide whether to join data and meta, or to return data or meta only.
Reading from xlsx fails after merging #19; trying to identify the issue points to @lru_cache() as the culprit.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-3b2a587960f9> in <module>()
----> 1 df = iam.IamDataFrame(data='../data/postparis_dbsnapshot.xlsx', regions=['World'])
C:\ProgramData\Anaconda3\lib\site-packages\pyam_analysis\core.py in __init__(self, data, **kwargs)
55 self.data = read_ix(data, **kwargs)
56 else:
---> 57 self.data = read_files(data, **kwargs)
58
59 # define a dataframe for categorization and other meta-data
C:\ProgramData\Anaconda3\lib\site-packages\pyam_analysis\utils.py in read_files(fnames, *args, **kwargs)
116 for fname in fnames:
117 logger().info('Reading {}'.format(fname))
--> 118 df = read_pandas(fname, *args, **kwargs)
119 dfs.append(format_data(df))
120
TypeError: unhashable type: 'list'
(The lru_cache decorator only accepts hashable arguments, so passing a list of file names raises the TypeError above.) Also, the current implementation will not work if reading from multiple files with mixed wide vs. long formats (because format_data() is called after merging the imported dataframes).
I think this is related to issue #40
If you download data directly from the SSP Database, e.g. iamc_db_emissions_CO2.xlsx, and then try to read it in, you get the following error:
xlsx_file_name = os.path.join('..', 'data', 'iiasa-ssp-portal', 'iamc_db_emissions_CO2.xlsx')
df = pyam.IamDataFrame(data=xlsx_file_name)
INFO:root:Reading `../data/iiasa-ssp-portal/iamc_db_emissions_CO2.xlsx`
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-8-af699d821ed6> in <module>()
1 xlsx_file_name = os.path.join('..','data','iiasa-ssp-portal','iamc_db_emissions_CO2.xlsx')
----> 2 df = pyam.IamDataFrame(data=xlsx_file_name)
/home/zebedee/anaconda2/lib/python2.7/site-packages/pyam/core.pyc in __init__(self, data, **kwargs)
57 self.data = read_ix(data, **kwargs)
58 else:
---> 59 self.data = read_files(data, **kwargs)
60
61 # define a dataframe for categorization and other metadata indicators
/home/zebedee/anaconda2/lib/python2.7/site-packages/pyam/utils.pyc in read_files(fnames, *args, **kwargs)
116 logger().info('Reading `{}`'.format(fname))
117 df = read_pandas(fname, *args, **kwargs)
--> 118 dfs.append(format_data(df))
119
120 return pd.concat(dfs)
/home/zebedee/anaconda2/lib/python2.7/site-packages/pyam/utils.pyc in format_data(df)
135 df = pd.melt(df, id_vars=IAMC_IDX, var_name='year',
136 value_vars=numcols, value_name='value')
--> 137 df['year'] = pd.to_numeric(df['year'])
138
139 # drop NaN's
/home/zebedee/anaconda2/lib/python2.7/site-packages/pandas/core/tools/numeric.pyc in to_numeric(arg, errors, downcast)
131 coerce_numeric = False if errors in ('ignore', 'raise') else True
132 values = lib.maybe_convert_numeric(values, set(),
--> 133 coerce_numeric=coerce_numeric)
134
135 except Exception:
pandas/_libs/src/inference.pyx in pandas._libs.lib.maybe_convert_numeric()
ValueError: Unable to parse string "notes" at position 6941
pandas can read the files fine, so I think it's something in the format_data method:
pd.read_excel(xlsx_file_name).head()
Model Scenario Region Variable Unit 2005 2010 2020 2030 2040 2050 2060 2070 2080 2090 2100 Notes
0 AIM/CGE SSP1-26 R5.2ASIA Emissions|CO2 Mt CO2/yr 11425.5834 11634.7357 13065.2252 10592.7347 6659.8390 4741.0358 3034.2767 1420.4712 677.9487 129.9080 -108.3545 NaN
1 AIM/CGE SSP1-34 R5.2ASIA Emissions|CO2 Mt CO2/yr 11425.5834 11634.7357 13065.2252 12529.0331 11158.2383 9226.0869 6898.9745 5051.7566 3835.2232 2832.4870 2140.2139 NaN
2 AIM/CGE SSP1-45 R5.2ASIA Emissions|CO2 Mt CO2/yr 11425.5834 11634.7357 13089.8659 13477.6715 13734.3888 12765.2632 11071.2456 9231.4057 7124.6789 5455.4447 4253.2434 NaN
3 AIM/CGE SSP1-Baseline R5.2ASIA Emissions|CO2 Mt CO2/yr 11425.5834 11659.0560 13213.7447 14956.9321 15380.0704 14824.3021 13783.3527 11478.1689 9623.0991 8231.9889 7424.4252 NaN
4 AIM/CGE SSP2-26 R5.2ASIA Emissions|CO2 Mt CO2/yr 11425.5834 12507.0952 15734.6777 10866.7656 6753.7836 3843.2592 2228.3360 524.0113 -266.0716 -877.6898 -1315.4186 NaN
pd.read_excel(xlsx_file_name).tail()
Model | Scenario | Region | Variable | Unit | 2005 | 2010 | 2020 | 2030 | 2040 | 2050 | 2060 | 2070 | 2080 | 2090 | 2100 | Notes
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
WITCH-GLOBIOM | SSP5-34 | World | Emissions\|CO2 | Mt CO2/yr | 31922.044354 | 35303.38776 | 37813.933396 | 18924.216520 | 19575.671788 | 18596.184683 | 18180.643978 | 17830.551070 | 17627.297040 | 15580.003928 | 5355.704367 | NaN
WITCH-GLOBIOM | SSP5-45 | World | Emissions\|CO2 | Mt CO2/yr | 31922.044354 | 35303.38776 | 39879.429728 | 32182.096960 | 32652.594997 | 32221.855535 | 31312.519330 | 29779.858417 | 27365.243339 | 24872.151035 | 14496.015076 | NaN
WITCH-GLOBIOM | SSP5-60 | World | Emissions\|CO2 | Mt CO2/yr | 31922.044354 | 35303.38776 | 40954.058910 | 45047.594450 | 51485.070329 | 50229.822011 | 45199.301594 | 45806.835461 | 46193.419795 | 44540.537803 | 38631.339460 | NaN
WITCH-GLOBIOM | SSP5-Baseline | World | Emissions\|CO2 | Mt CO2/yr | 31922.044354 | 35303.38776 | 44321.530964 | 56625.698727 | 72881.159836 | 88326.222670 | 100068.273329 | 109388.272393 | 116226.693340 | 118505.891175 | 114622.309010 | NaN
© SSP Public Database (Version 1.1) https://tn... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN
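A possible workaround until format_data() handles such files, dropping the non-IAMC Notes column and the trailing copyright row before constructing the IamDataFrame (a sketch, assuming the column names shown above):

import pandas as pd
import pyam

raw = pd.read_excel(xlsx_file_name)
# drop the extra 'Notes' column and rows without a scenario (the copyright footer)
raw = raw.drop(columns=['Notes']).dropna(subset=['Scenario'])
df = pyam.IamDataFrame(data=raw)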
Something that occurred to me during #88 and that might be a good future enhancement.
At the moment, if you have a dataframe with regions China, Japan, Germany, UK, Europe, Asia, World, it is not clear which regions are subsets of which other regions. Hence it's impossible to automate checking of regions over all combinations (i.e., countries should add up to continents, continents to World).
If you had regions stored like variables (e.g. World|Europe|UK, World, World|Asia, World|Asia|China), then you could use the same aggregation-checking logic etc. on regional breakdowns as on variables.
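A sketch of how '|'-separated regions would enable the same checking logic (names illustrative):

def direct_children(regions, parent):
    # regions exactly one level below `parent` in the '|' hierarchy
    prefix = parent + '|'
    return [r for r in regions
            if r.startswith(prefix) and '|' not in r[len(prefix):]]

regions = ['World', 'World|Asia', 'World|Asia|China',
           'World|Europe', 'World|Europe|UK']
direct_children(regions, 'World')  # ['World|Asia', 'World|Europe']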
Should be the same as a bar chart but limited in one dimension (i.e., only one bar).
Should go over run control options
this allows users to pre-categorize all data for many different workflows
Initially this should support years on the x-axis and stacking of either regions or sectors, with the unit as the y-axis label.
As part of the extension to include datetime as a time format in addition to yearly data (see #167), the function read_files() was simplified so that it only reads one file.
This function should be extended to take a list of files again, keeping in mind that we now need to check that time_col and extra_cols are compatible, similar to how append() is implemented.
In the climate community, we often read data from netCDF files. These files have a huge amount of metadata contained within them (when the file was written, who by, how, etc.). For SCMs, we often read these files in, take some average, and then use the result.
Given the usefulness of the IamDataFrame, we would want to read them into that format. However, in such a dataframe, there is nowhere obvious to store all the metadata.
Would we consider adding a metadata attribute, where we could store whatever we wanted (dict of metadata, strings) that we don't check in any way, but which is simply provided as a catch-all for users who need somewhere to store extra stuff? It doesn't belong in meta because "who wrote the file" isn't something we want in such a meta dataframe, I don't think.
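For illustration, the kind of ancillary information meant here, read with xarray (the file name is hypothetical):

import xarray as xr

ds = xr.open_dataset('tas_histssp_r1i1p1f1.nc')  # hypothetical file
extra = dict(ds.attrs)  # e.g. institution, history, creation date
# proposed: stash it on the IamDataFrame without any validation
# df.metadata = extra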
This should be done once JOSS is done. These include
It seems like there is an issue if you want to use color/marker based on metadata
(df
.filter(region='World')
.filter(Temperature='uncategorized', keep=False)
.scatter(x='Primary Energy|Coal', y='Emissions|CO2', color='Temperature',
legend=True)
)
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~/.local/lib/python3.5/site-packages/matplotlib/colors.py in to_rgba(c, alpha)
157 try:
--> 158 rgba = _colors_full_map.cache[c, alpha]
159 except (KeyError, TypeError): # Not in cache, or unhashable.
KeyError: ('T', None)
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
~/.local/lib/python3.5/site-packages/matplotlib/axes/_axes.py in scatter(self, x, y, s, c, marker, cmap, norm, vmin, vmax, alpha, linewidths, verts, edgecolors, **kwargs)
4209 try: # Then is 'c' acceptable as PathCollection facecolors?
-> 4210 colors = mcolors.to_rgba_array(c)
4211 n_elem = colors.shape[0]
~/.local/lib/python3.5/site-packages/matplotlib/colors.py in to_rgba_array(c, alpha)
258 for i, cc in enumerate(c):
--> 259 result[i] = to_rgba(cc, alpha)
260 return result
~/.local/lib/python3.5/site-packages/matplotlib/colors.py in to_rgba(c, alpha)
159 except (KeyError, TypeError): # Not in cache, or unhashable.
--> 160 rgba = _to_rgba_no_colorcycle(c, alpha)
161 try:
~/.local/lib/python3.5/site-packages/matplotlib/colors.py in _to_rgba_no_colorcycle(c, alpha)
203 pass
--> 204 raise ValueError("Invalid RGBA argument: {!r}".format(orig_c))
205 # tuple color.
ValueError: Invalid RGBA argument: 'T'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-35-12664c2e4825> in <module>()
3 .filter(Temperature='uncategorized', keep=False)
4 .scatter(x='Primary Energy|Coal', y='Emissions|CO2', color='Temperature',
----> 5 legend=True)
6 )
~/.local/lib/python3.5/site-packages/pyam/core.py in scatter(self, x, y, **kwargs)
958 .rename(columns={'value': var})
959 )
--> 960 ax = plotting.scatter(df, x, y, **kwargs)
961 return ax
962
~/.local/lib/python3.5/site-packages/pyam/plotting.py in scatter(df, x, y, ax, legend, title, color, marker, linestyle, cmap, groupby, with_lines, **kwargs)
603 else:
604 kwargs.pop('linestyle') # scatter() can't take a linestyle
--> 605 ax.scatter(group[x], group[y], **kwargs)
606
607 # build legend handles and labels
~/.local/lib/python3.5/site-packages/matplotlib/__init__.py in inner(ax, data, *args, **kwargs)
1783 "the Matplotlib list!)" % (label_namer, func.__name__),
1784 RuntimeWarning, stacklevel=2)
-> 1785 return func(ax, *args, **kwargs)
1786
1787 inner.__doc__ = _add_data_doc(inner.__doc__,
~/.local/lib/python3.5/site-packages/matplotlib/axes/_axes.py in scatter(self, x, y, s, c, marker, cmap, norm, vmin, vmax, alpha, linewidths, verts, edgecolors, **kwargs)
4229 "or as numbers to be mapped to colors. "
4230 "Here c = {}." # <- beware, could be long depending on c.
-> 4231 .format(c)
4232 )
4233 else:
ValueError: 'c' argument must either be valid as mpl color(s) or as numbers to be mapped to colors. Here c = Temperature.