
walter's Introduction

Installation of OpenAlea

Conda Installation

Conda is a package manager that can be installed on Linux, Windows, and Mac. If you have not yet installed conda on your computer, follow these instructions:

Conda Installation: follow the instructions for Miniconda.

Conda Download: use the Python 2.7 based installer.

Windows, Linux, Mac

Create an environment named walter:

conda create -n walter -c openalea openalea.lpy boost=1.66

Activate the walter environment (omit 'source' on Windows):

[source] activate walter

Install the required packages:

conda install -c openalea openalea.mtg alinea.caribu notebook matplotlib pandas scipy

conda install -c openalea -c conda-forge pvlib-python pytables

conda install rpy2

git clone https://github.com/openalea-incubator/astk.git

Then, in the newly cloned astk directory, run: python setup.py install

conda install nose

walter's People

Contributors: bilalderouich, christian34, chrlecarpentier, emblanc, pradal


walter's Issues

Problem with combi_params.csv

Some floats are approximated when read from the sim_scheme.csv files, which in turn leads to problems in the creation of combi_params.csv when running several simulations on some computers.
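One possible mitigation, sketched with pandas (an assumption; the actual CSV reader in the code may differ): ask for round-trip float parsing, or keep the raw strings so combi_params.csv is written from exactly what was read.

    import pandas as pd

    # Round-trip parsing: the parsed floats print back to the same text.
    sim_scheme = pd.read_csv('sim_scheme.csv', float_precision='round_trip')

    # Or avoid conversion entirely and keep the values as strings:
    sim_scheme_raw = pd.read_csv('sim_scheme.csv', dtype=str)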

WALTer and PyCharm

walter.lpy works when run from Python or a terminal, but launching it from PyCharm raises errors.

Problem installing WALTer on Ubuntu 16

I tried to install WALTer on a computer running Ubuntu 16 following the instructions on the wiki, but when I run the nosetests command in the test directory (inside my walter environment), I get an error:

ImportError: No module named lpy

If I try to open lpy in my environment, I get :

RuntimeError: the sip module implements API v11.0 but the PyQGLViewerQt4 module requires API v11.3

After updating sip, I can open lpy normally (still in my walter environment), but I still get the same error when I try to run nosetests.

Enhance architecture to manage the scene only once

Currently, the information is duplicated between the LSystem, the Caribu scene and the WALTer dict.

Develop a design pattern to manage the scene only once, or ensure that the synchronisation is correct.

Discrepancy between output file and scene

  1. Information in the Blade.csv file is saved from the lstring
  2. Regression of tillers depends on information from the lscene
  3. The lstring and lscene may hold different information: at steps when a blade has to be cut, the blade is still present in the lstring but no longer in the lscene

-> Output files should contain information consistent with the lscene, as that is the information used to make decisions on tiller regression

Lpy cut bug fixed

The lpy cut bug has been fixed in recent versions of lpy, and the fix is available for all platforms via conda: the code related to the management of this bug is therefore deprecated and can be deleted.

Bug Ln_final

There is a bug when running simulations with plants with a large number of leaves. It is due to a miscalculation of t_beg_reg_ind (the date at which a plant can start to regress).

Changes in plant orientation

At the beginning of the simulation, the orientation of the plants changes every day (only when there is more than one plant in the simulation).

Wrong Blade_sumtemp in output file

In the Blade.csv output file, the Blade_sumtemp column is filled with the Leaf_sumtemp information instead of the actual Blade_sumtemp

ID_simul management

WALTer.lpy generates an ID for each simulation and writes it to a file in the simulation directory.
The ID should instead be generated by the launcher.
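A minimal sketch of launcher-side generation (the 'id-' prefix mirrors the output directory naming mentioned in the next issue; the format is otherwise an assumption):

    import uuid

    # Generate the ID in the launcher and pass it down to the model,
    # instead of letting WALTer.lpy create it.
    sim_id = 'id-' + uuid.uuid4().hex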

Saving the simulation time

It would be useful to save the information of the time that each simulation has run. For example, a file could be saved in each id-xx[...]xx output directory with the simulation time.
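A minimal sketch, assuming a run_simulation entry point and the id-xx[...]xx directory layout (both placeholders):

    import os
    import time

    start = time.time()
    # run_simulation(...)  # placeholder for the actual WALTer run
    elapsed = time.time() - start

    output_dir = 'id-example'  # placeholder for the real id-xx[...]xx directory
    if not os.path.isdir(output_dir):
        os.makedirs(output_dir)
    with open(os.path.join(output_dir, 'simulation_time.txt'), 'w') as f:
        f.write('%.1f seconds\n' % elapsed)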

Walter command line

Add a command line that will execute the model for a given set of parameters.

walter -n project
cd project
walter -i sim_scheme.csv

Improve parallelization

Parallelization of several simulations is currently handled with a loop that avoids running more than 3 simulations at the same time.
There is probably a better way to manage parallelization (with a dedicated function), and the number of simulations to run simultaneously should be an input parameter of the run function.
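A minimal sketch of such a dedicated function with multiprocessing (run_one is a placeholder for whatever launches a single simulation):

    from multiprocessing import Pool

    def run_one(sim_id):
        pass  # placeholder: launch one WALTer simulation

    def run_all(sim_ids, max_workers=3):
        # max_workers is the input parameter requested above: at most
        # that many simulations run at the same time.
        pool = Pool(processes=max_workers)
        try:
            pool.map(run_one, sim_ids)
        finally:
            pool.close()
            pool.join()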

Problem with parallelization?

When running a large number of simulations, my computer slows down considerably; it often freezes and I can't use it.
Furthermore, some simulations do not work properly (I do not get all the output files at the end of the simulation, but the simulation does not produce an error message) when I run a large number of simulations. If I run the same simulation alone or with fewer simulations, it works, which makes me think that my computer is overwhelmed when there are too many simulations (10 simulations is already too much).
I did not have this kind of problem when simulations were run one at a time.

Change output_path method

Make the output_path method, when called without args, point to the latest ('current') simulation that has run. This would be more intuitive than the current behavior, where output_path without args points to the last sim_id added to the itable, even if the last run was done for another id.
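A minimal sketch of the proposed behaviour (Project, run and output_path are the existing names; the _last_run_id attribute is an assumption):

    class Project(object):
        # ... existing attributes: dirname, itable ...

        def run(self, sim_id):
            # ... launch the simulation, then remember which id actually ran.
            self._last_run_id = sim_id

        def output_path(self, sim_id=None):
            # Without args, point at the simulation that ran last rather
            # than at the last sim_id added to the itable.
            if sim_id is None:
                sim_id = self._last_run_id
            return self.dirname / sim_id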

Clarify input management

The input management can/should be clarified, to avoid mixing parameterisation and code in the lpy file and to avoid multiple declarations of parameters / the register_parameters dict (see e.g. #22 #21 #20).
One way to go could be to define the default parameters in a specific module and use them as default params once and for all in lpy. For cultivar-specific conditions, maybe another module could handle such a database.
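A minimal sketch of that layout (the module name and merge helper are hypothetical; the parameter names are those appearing elsewhere in these issues):

    # default_parameters.py: the single place where defaults are declared.
    DEFAULTS = {
        'nb_plt_utiles': 1,
        'dist_border_x': 0,
        'dist_border_y': 0,
    }

    def merge(user_params=None):
        # Start from the defaults and override with user-supplied values,
        # so the lpy file never declares parameters itself.
        params = dict(DEFAULTS)
        params.update(user_params or {})
        return params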

Invalid value warning

/home/walter/WALTer/src/walter_data/WALTer.lpy:2479: RuntimeWarning: invalid value encountered in double_scalars
dico_PAR_per_axis[nump][numt][round(Tempcum, 1)] = row.Organ_PAR / Temperature / row.Organ_surface
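The warning points at a division whose denominator can be zero (or 0/0). A minimal guard, reusing the names from the line above (whether zero denominators are actually expected here is an assumption):

    # Avoid the invalid-value warning when the denominator is zero.
    denom = Temperature * row.Organ_surface
    if denom:
        dico_PAR_per_axis[nump][numt][round(Tempcum, 1)] = row.Organ_PAR / denom
    else:
        dico_PAR_per_axis[nump][numt][round(Tempcum, 1)] = float('nan')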

Caribu error

Running the main test with the following args makes Caribu fail:

    lsystem_file = pj(data_access.get_data_dir(), 'WALTer.lpy')
    lsys = Lsystem(lsystem_file, {'params': {'nb_plt_utiles': 1,
                                             'dist_border_x': 0,
                                             'dist_border_y': 0,
                                             'nbj': 55,
                                             'beginning_CARIBU': 290}})

      File "<string>", line 2216, in EndEach
    KeyError: 2

Curved leaves

Add curvature to the leaves to improve realism.

Adapt WHEAMM competition functions to WALTer outputs

Adapt the competition functions to use dictionaries plant_map = {plant_id: (x, y)}, neighbours_list = {plant_id: [neighbour_ids]} and info_plants = {plant_id: (surface, height)}, and later influence_plants = {plant_id: ri}, with ri the radius of the influence surface. Make the connection between these functions in competition_wheamm.py and the WALTer outputs.
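A minimal illustration of those structures with made-up values (ids, coordinates and the ri rule are illustrative only):

    # Illustrative data following the structures described above.
    plant_map = {1: (0.0, 0.0), 2: (12.5, 0.0), 3: (0.0, 12.5)}
    neighbours_list = {1: [2, 3], 2: [1], 3: [1]}
    info_plants = {1: (350.0, 42.0), 2: (280.0, 38.5), 3: (310.0, 40.0)}  # (surface, height)
    # Placeholder rule for the influence radius ri:
    influence_plants = {pid: 0.5 * h for pid, (s, h) in info_plants.items()}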

"NOT using graph editor observer No module named grapheditor" with both test_light.py and test_light_interception.py

Hi,

I went through the steps from this wiki page:

conda create -n walter -c openalea openalea.lpy boost=1.66
activate walter
conda install -c openalea -c conda-forge pvlib-python pytables alinea.astk
conda install -c openalea openalea.mtg alinea.caribu notebook matplotlib pandas scipy

Then I went to the cloned directory to run setup.py:

cd C:/Users/twang/Documents/GitHub/WALTer
python setup.py install

Here I got the warnings:

reading manifest template 'MANIFEST.in'
warning: no previously-included files matching '__pycache__' found under directory '*'

When I run test_light.py and test_light_interception.py, I get the following message:

NOT using graph editor observer No module named grapheditor

I have tried the lines mentioned in this thread but the message is still there.

Does this message cause a problem for running the project?

And is this message related to the warnings during the installation with setup.py?

Thank you in advance.

Here is some basic information about my working system.

          conda version : 4.7.10
    conda-build version : 3.17.8
         python version : 2.7.16.final.0
               platform : win-64
             user-agent : conda/4.7.10 requests/2.22.0 CPython/2.7.16 Windows/10 Windows/10.0.17763
          administrator : False
             netrc file : None
           offline mode : False

Important changes in intercepted PAR

The PAR intercepted by each plant is measured for each day in the model. For some plants there are very large changes in the intercepted PAR from one day to the next.

Problems when running simulations with too many decimals

    (walter) jenjalbert@cabidos:~/WALTer/testhypercube$ walter -i sim_scheme_hypercube_court.csv
    Traceback (most recent call last):
      File "/home/jenjalbert/miniconda2/envs/walter/bin/walter", line 11, in <module>
        load_entry_point('walter', 'console_scripts', 'walter')()
      File "/home/jenjalbert/WALTer/src/walter/command_line.py", line 64, in main
        prj = project.Project(args.p)
      File "/home/jenjalbert/WALTer/src/walter/project.py", line 70, in __init__
        self.itable = OrderedDict(self.read_itable(self.dirname / itable))
      File "/home/jenjalbert/WALTer/src/walter/project.py", line 154, in read_itable
        return _byteify(json.load(itable))
      File "/home/jenjalbert/miniconda2/envs/walter/lib/python2.7/json/__init__.py", line 291, in load
        **kw)
      File "/home/jenjalbert/miniconda2/envs/walter/lib/python2.7/json/__init__.py", line 339, in loads
        return _default_decoder.decode(s)
      File "/home/jenjalbert/miniconda2/envs/walter/lib/python2.7/json/decoder.py", line 367, in decode
        raise ValueError(errmsg("Extra data", s, end, len(s)))
    ValueError: Extra data: line 1 column 126095 - line 1 column 126100 (char 126094 - 126099)

GAIprox is computed with the same neighbours for all plants

The dummy variable used to iterate over the neighbours of a plant for the GAIprox computation is wrong: num_plante is used instead of num_plt, resulting in a constant neighbourhood for all plants.
Fixing this typo raises another error and will influence the reference computation. I therefore open a new PR to solve this issue.

    # Calcul du GAI de proximite
    for num_plt in range(1, crop_scheme["nplant_peupl"] + 1):
        surface_peupl = 0
        for num_voisin in dico_voisins[num_plante]:  # bug: num_plante should be num_plt
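The fix is to iterate over the current plant's neighbours (the loop body is unchanged and omitted here):

    # Calcul du GAI de proximite (fixed: num_plt instead of num_plante)
    for num_plt in range(1, crop_scheme["nplant_peupl"] + 1):
        surface_peupl = 0
        for num_voisin in dico_voisins[num_plt]:
            pass  # unchanged loop body omitted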

Error in test_cut_module

In the LSystem used as a test, an error occurs in the last line of the EndEach:

    for sid in lscene.todict():
      print 'sid: ' + str(sid)
      print new_lstring[sid].name

The last line (33) raises an error because sid is out of range.

Organs considered for the computation of par/surface

The decision for tiller regression is made by comparing a "par/surface" value with the parameter PARt

Currently, for the computation of the par/surface value:

  • Blades only intercept light when they are photosynthetic
  • All the other organs intercept light (whether they are photosynthetic or not)
  • Blade surfaces are only considered for photosynthetic blades
  • The surfaces of the other organs are always taken into account (whether they are photosynthetic or not)

This needs to be changed to (see the sketch after this list):

  • Only photosynthetic organs can intercept light
  • Only photosynthetic organs are considered for the computation of the tiller surface EXCEPT for sheaths
  • Sheaths are always taken into account for the tiller surface (whether they are photosynthetic or not) to account for the sink constituted by the growing organs hidden inside senescent sheaths
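A minimal sketch of the proposed rule as predicates (the organ attributes name and photosynthetic follow the code quoted in other issues):

    def intercepts_light(organ):
        # Proposed rule: only photosynthetic organs intercept light.
        return organ.photosynthetic

    def counts_for_tiller_surface(organ):
        # Proposed rule: photosynthetic organs, plus sheaths in all cases,
        # since senescent sheaths hide growing organs that act as sinks.
        return organ.photosynthetic or organ.name == "Sheath"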

Change the sky

Currently the sky used in WALTer is created via Caribu with the following parameters:
nb_azimuth = 5
nb_zenith = 4
and only diffuse radiation is used.
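For reference, a minimal sketch of what such a discretisation looks like as pure geometry (independent of the Caribu API; the equal weighting per sector is an assumption):

    import math

    def sky_directions(nb_azimuth=5, nb_zenith=4):
        # One direction per (azimuth, zenith) sector centre, equal weights.
        directions = []
        for i in range(nb_azimuth):
            az = 2 * math.pi * (i + 0.5) / nb_azimuth
            for j in range(nb_zenith):
                zen = (math.pi / 2) * (j + 0.5) / nb_zenith
                x = math.sin(zen) * math.cos(az)
                y = math.sin(zen) * math.sin(az)
                z = -math.cos(zen)  # pointing down, towards the scene
                directions.append(((x, y, z), 1.0 / (nb_azimuth * nb_zenith)))
        return directions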

Important shifts in light interception

The high PAR values come from the way sumPAR is computed:

  1. On relevant organs, PAR is computed as: Ei (PAR irradiance) * area_of_the_primitive
  2. Tiller_surface is computed as the sum of the visible_area of all relevant organs belonging to an axis (with mathematical formulae + an estimate of visibility)
  3. Organ PAR values are summed per axis and divided by tiller_surface and Temperature (???), to get a 'corrected for visibility (and temperature ??)' irradiance (called sum_PAR)
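Restating the computation above as a small worked example (all numbers are illustrative only):

    # (Ei, primitive area) per relevant organ of one axis -- step 1
    organ_measurements = [(120.0, 8.0), (95.0, 5.5)]
    tiller_surface = 10.0   # estimated visible area of the axis -- step 2
    Temperature = 12.0
    # Step 3: sum per axis, divide by tiller_surface and Temperature.
    sum_PAR = sum(Ei * area for Ei, area in organ_measurements) / (tiller_surface * Temperature)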

The strange values for sum_PAR occur when tiller_surface << sum(primitive_area). For example, at the considered time step, the tiller_surface of axis (1,2,1) of plant 3 is less than 1 % of the sum of the primitive organ areas, whereas for other axes it is from 3 to 50 %.

These discrepancies can come from discretisation effects (the mathematical cylinder area is not equal to the primitive area) and from bugs/problems in the visibility computation (it is difficult to assess a priori what part of the primitive is exposed to light when geometric objects are intertwined and light comes from several directions).

Probably the best solution will be to switch to representing only the visible part of organs (PR #9) and to compute the tiller area as the sum of the contributing primitives.

Scene unit for Caribu

Currently the scene unit declared for Caribu is m whilst the actual scene is in cm (and PAR radiation is per m2): this induces unexpected units in the Caribu outputs (Ei, area, ...).
It would be better to use consistent inputs to ease the reading of the Caribu output (see #38).
This may require a transformation of light-related parameters / outputs.
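If the installed version of alinea.caribu exposes a scene_unit option at scene construction (an assumption to check against your install), declaring the real unit is one way to get consistent outputs:

    from alinea.caribu.CaribuScene import CaribuScene

    # scene and light are assumed to be built elsewhere (e.g. from the
    # lscene and the sky sources); 'cm' declares the geometry unit.
    cscene = CaribuScene(scene=scene, light=light, scene_unit='cm')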

No light on most plants

I noticed that some plants in my simulations never intercept light: in the PAR_per_axes.csv output file, their value for Sum_PAR is always 0.

For example, when I run sim_scheme_test.txt (which is the simulation we use for tests, but with 50 plants instead of 4), plants 11 to 50 never intercept any light! (see PAR_per_axes.txt)

I think this might be related to pull request #10, when we removed mapping_table @christian34, @chrlecarpentier.
Before #10, the PAR intercepted by each organ was extracted using mapping_table:

for id in mapping_table.keys():
  if new_lstring[id].name == "Blade" and new_lstring[id][0].photosynthetic == True or new_lstring[id].name == "Sheath" or new_lstring[id].name == "Internode" or new_lstring[id].name == "Peduncle" or new_lstring[id].name == "Ear" and new_lstring[id][0].emerged == True:
    if new_lstring[id][0].tiller in axis_census[new_lstring[id][0].num_plante].keys():
      Debug_PAR_dico_df["Ei"].append(res_sky["Ei"][mapping_table[id]])
      if res_sky["Ei"][mapping_table[id]] < 0:
        new_lstring[id][0].PAR = 0
      else:
        new_lstring[id][0].PAR = res_sky["Ei"][mapping_table[id]] * res_sky["area"][mapping_table[id]]

Now we no longer use mapping_table. Instead, ids are extracted from the res_sky dictionary:

for id in res_sky['Ei'].keys():
  new_ = lstring[id]
  if ((new_.name == "Blade" and new_[0].photosynthetic == True) or (new_.name in ("Sheath", "Internode", "Peduncle")) or (new_.name == "Ear" and new_[0].emerged)):
    if new_[0].tiller in axis_census[new_[0].num_plante].keys():
      Debug_PAR_dico_df["Ei"].append(res_sky["Ei"][id])
      if res_sky["Ei"][id] < 0:
        new_[0].PAR = 0
      else:
        new_[0].PAR = res_sky["Ei"][id] * res_sky["area"][id]

It is my understanding that res_sky['Ei'] has one key for each organ in the simulation. However, there are more objects in lstring, because there is one object per organ plus other objects ("[" and "]" for example). I checked, and it seems that lstring always has between 6 and 12 times more ids than res_sky['Ei'].
This is why I think res_sky["Ei"][id] does not refer to the same organ as lstring[id].
As I see it, when all ids in res_sky['Ei'].keys() have been used, there are still organs in lstring that have not been considered.
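A quick diagnostic along those lines (variable names follow the snippets above; this only counts ids, it does not fix the mapping):

    # Compare the id sets on both sides.
    light_ids = set(res_sky['Ei'].keys())
    print(len(light_ids), len(new_lstring))  # the lstring is reported 6-12x larger
    # Lstring positions that never receive a light value:
    missing = [i for i in range(len(new_lstring)) if i not in light_ids]
    print(len(missing))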
