
mpet's People

Contributors

d-cogswell, edwinksl, lightningclaw001, loostrum, nicorenaud, raybsmith, supra17, v1kko

mpet's Issues

Very stiff problem solver

When the intra-layer reaction rate in CHR2 is too high, the initialization or the solver fails because of the small time scales.
In particular, the characteristic time for reaction is 100 times smaller than the characteristic time for diffusion in the particle.
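For a rough sense of the stiffness, here is a back-of-the-envelope sketch; the time-scale estimate and the numbers are illustrative assumptions, only the factor of 100 comes from the observation above.

R = 100e-9        # particle radius [m] (assumed value)
D = 1e-18         # solid-phase diffusivity [m^2/s] (assumed value)

tau_diff = R**2 / D          # characteristic diffusion time in the particle
tau_rxn = tau_diff / 100     # reaction time scale, 100x smaller as observed above

print(f"tau_diff = {tau_diff:.3g} s, tau_rxn = {tau_rxn:.3g} s, "
      f"stiffness ratio = {tau_diff / tau_rxn:.0f}")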
Any idea on how to solve the problem?

CIET current expression

May I ask some questions about the CIET current rate expression? The equation is as follows:
Rate[i] = ecd_extras[i]*(krd*c_lyte - kox*c_sld[i])

  1. I have read the CIET paper, namely "Theory of coupled ion-electron transfer kinetics", and found that Equation 41 does not use krd*c_lyte; it is simply krd. I think this may be a typo, but I am not sure.
  2. c_lyte and c_sld are normalized concentrations. May I ask how they are normalized, i.e., what are the reference concentrations for c_lyte and c_sld?
     I would appreciate it if you could clarify. Thanks.
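For reference, a minimal standalone sketch of the quoted expression with the normalization made explicit; the reference values c_ref and csmax are assumptions for illustration, which is exactly what question 2 asks to confirm.

def ciet_rate_sketch(c_lyte_dim, c_sld_dim, krd, kox, ecd_extra,
                     c_ref=1000.0, csmax=2.5e4):
    # Illustrative only: normalize by an assumed electrolyte reference
    # concentration and an assumed maximum solid concentration [mol/m^3],
    # then evaluate the quoted rate expression.
    c_lyte = c_lyte_dim / c_ref
    c_sld = c_sld_dim / csmax
    return ecd_extra * (krd * c_lyte - kox * c_sld)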

Solid diffusion model for solid state solution implementation error

Eq. 57 in the MPET paper is implemented incorrectly in the software. It should be implemented either as
F = - D * Dfunc/T * (dmu/dc) \nabla c, if assuming a non-dilute solution model (and dc/dmu \approx Dfunc/T),
or as
F = - D * \nabla c. (I am leaning towards implementing this one, since it is more physically accurate for comparisons with other software and it is the actual definition of the flux.)
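For concreteness, a minimal 1-D finite-difference sketch of the two candidate flux forms; the grid handling and the Dfunc / dmu_dc callables are illustrative, not the actual mpet internals.

import numpy as np

def flux_option_1(c, r, D, T, Dfunc, dmu_dc):
    # Option 1: F = -D * Dfunc/T * (dmu/dc) * grad(c), evaluated on cell faces
    c_face = 0.5 * (c[1:] + c[:-1])
    grad_c = np.diff(c) / np.diff(r)
    return -D * Dfunc(c_face) / T * dmu_dc(c_face) * grad_c

def flux_option_2(c, r, D):
    # Option 2: plain Fickian flux, F = -D * grad(c)
    return -D * np.diff(c) / np.diff(r)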
@d-cogswell what do you think?

Half cell model voltage hold potential incorrect

A slight technicality for people who want to do a voltage hold on half cells. For half-cell models the voltage hold should only be applied to the solid electrical potential. However, the half-cell concentrated-solution models include the kB T ln(a_l) term implicitly in the electrical potential. This is technically incorrect, but it is only a problem for people who use half-cell voltage holds with the concentrated electrolyte model (probably only me right now); I am keeping track of it here in case people see this problem in the future.
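In symbols, using mpet's nondimensional units as in the code line below,

$$\mu_{\mathrm{lyte}} = T \ln(a_\ell) + \phi_{\mathrm{lyte}},$$

and for a half cell the hold should be applied to the solid electrical potential $\phi_{\mathrm{sld}}$ alone, not to this combination.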

E.g. in the function calc_mu_O in mod_electrodes.py,
mu_lyte = T*np.log(act_lyte) + phi_lyte
should be true for both the SM and dilute models.

Solver crashes during initialization when particle # above certain threshold

The solver crashes when too many particles are put into the system. To replicate this problem, one can construct the simplest possible simulation (e.g. Nvol_c = 1, Nvol_s = 0, Nvol_a = 0, type = homog) and increase the particle number to "Npart_c = 3000." The program crashes while displaying "0% ..ETA."

  • Changing DAESOLVER settings (MaxNumSteps, IterIC, StepsIC, MemoryIncrements, etc) does not seem to affect this threshold.

  • Physical memory resources of the machine seem unrelated to the problem. Memory usage can be made minimal but the initialization will still fail (about 600 MB for 2500 particles).

  • The threshold for crashing is platform dependent. On Windows (daetools 1.9.0) the threshold is about 2453 particles (plus or minus a few, varying between runs). On a Linux machine the threshold seems to be above 4100, but eventually the same problem occurs. This is possibly related to the C compiler used on these platforms rather than to the platform itself.

  • Will probably have to debug daetools to find the exact reason.

Extending Test suites

We should extend the test suites to Ubuntu 18 and Windows (and potentially Mac if we manage to install DAE tools on OSX)

Pull request #24 broke some plots

For example, I noticed that csld_{a,c} no longer work. Please check all plots, push any fixes to fix/hdf5, and create a new pull request since the old one has already been merged.

DAE Tools Ubuntu 18 and Mac

DAE Tools seems to be difficult to install on Ubuntu 18 and Mac. We should make sure that DAE Tools works on different platforms.

Updated `tend` value when working with segments is not used

The following piece of code in io_utils.py is supposed to avoid extrapolation by changing tend:

if "segments" in ndD_s["profileType"]:
    dD_s["tend"] = dD_s["segments_tvec"][-1]
    # Pad the last segment so no extrapolation occurs
    dD_s["segments_tvec"][-1] = dD_s["tend"]*1.01

However, this change is only applied to the dimensional config dict, after the non-dimensional version has already been created from it a few lines above. The simulation uses only the non-dimensional config, so the old tend is used instead of the updated value.
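A sketch of one possible fix, assuming the dimensional dict also carries profileType and using a hypothetical nondimensionalize() helper to stand in for however the non-dimensional dict is actually derived; the point is only the ordering, i.e. padding tend before the non-dimensional config is created.

if "segments" in dD_s["profileType"]:
    dD_s["tend"] = dD_s["segments_tvec"][-1]
    # Pad the last segment so no extrapolation occurs
    dD_s["segments_tvec"][-1] = dD_s["tend"]*1.01
# Only now derive the non-dimensional config, so it picks up the padded tend
ndD_s = nondimensionalize(dD_s)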

Discharge current proportional to cathode thickness

Hi,

Great Program! The program seems to have an unintuitive way to set the discharge current and I'm not sure if it's a bug or if I'm doing it wrong. It seems to me that if I set 1C_current_density = 1 and Crate = 1, the simulation should represent a discharge of 1 A/m2. However, by way of scaling, the program actually calculates the current with

currset = 1C_current_density * Crate * L_c / (e * ( 1 - poros_c) * P_L * rho_s * Damb)

In my simulations I'm looking to discharge cathodes of different thicknesses (L_c) at the same discharge current density. But if I double L_c, the discharge current doubles as well. So I have to scale 1C_current_density with 1/L_c to make sure the current doesn't change across cathodes. This doesn't seem to be the intention behind 1C_current_density.

Furthermore, if you leave 1C_current_density empty, the current is instead calculated with

currset = Crate * L_c**2 / (3600 * Damb)

This, too, doesn't seem right. At a constant C-rate, the discharge current should be proportional to L_c, not L_c**2. The workaround (scaling with 1/L_c) is simple enough, but is this right?
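For reference, the rescaling I apply looks roughly like the sketch below; the reference thickness and the target current density are illustrative values, not mpet parameters.

# Keep the applied current density fixed while sweeping cathode thickness by
# rescaling 1C_current_density with 1/L_c (since currset ~ 1C_current_density * Crate * L_c).
target_1C = 1.0        # desired 1C current density [A/m^2] (assumed)
L_ref = 100e-6         # thickness [m] at which target_1C was calibrated (assumed)
for L_c in [50e-6, 100e-6, 200e-6]:
    scaled_1C = target_1C * L_ref / L_c
    print(f"L_c = {L_c*1e6:.0f} um -> set 1C_current_density = {scaled_1C:.2f} A/m^2")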

I'm using version 0.1.8. If it helps, I'm deriving this from
mod_cell.py line 378
configuration.py lines 436-444, 480
derived_values.py lines 173-177, 157-160, 146-149, 141-144, 184-187, 189-198

Thanks
Martin

Multiple output_data files in sim_output

After running a simulation with dataReporter = mat followed by dataReporter = hdf5 there will be output_data.mat and output_data.hdf5 in sim_output. This can create significant confusion when plotting data or exporting text data.

Config params are saved before running simulation

The config params are written to disk before running the simulation. If parameters change during the simulation, they are not updated in the output parameter files. Is this desirable, or would it be better to write them after running the simulation?

Updates to Dockerfiles

The current Dockerfiles are using daetools version 1.9.0. It would be great to update them to use the daetools Conda Forge package.

Fix diffusivity section

We need to implement non-constant diffusivity NOT in the Dfuncs. The Dfuncs are basically the equivalent of the thermodynamic factor in electrolyte diffusion, so non-constant diffusivities should be accounted for in a different way.

Automatically find regression tests in tests/ref_outputs

Regression test definitions do not need to be coded in tests/test_defs.py; the configuration files for the tests already exist in the folders of tests/ref_outputs. The regression suite should look for test folders in tests/ref_outputs to determine the list of tests. This will make adding a new test as simple as adding a folder with the configs and reference solution to ref_outputs.
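A minimal discovery sketch, assuming the code lives in a test module under tests/ and that the suite runs under pytest; the comparison against the reference solution is left as a placeholder.

from pathlib import Path
import pytest

REF_OUTPUTS = Path(__file__).parent / "ref_outputs"

def discover_tests():
    # One regression test per subfolder of tests/ref_outputs
    return sorted(p.name for p in REF_OUTPUTS.iterdir() if p.is_dir())

@pytest.mark.parametrize("test_dir", discover_tests())
def test_regression(test_dir):
    # Placeholder: run mpet with the configs in test_dir and compare the
    # output against the stored reference solution.
    ...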

Add Documentation

The code does not have documentation at the moment. We should build a simple Read the Docs site using Sphinx, explaining how to install and run the code and how to analyse the results.

Separate workflows for reference and new tests

I created a dummy GitHub repository and did some testing of this CI to make sure it catches errors. I noticed that if I add new functionality to mpet and create a new test for it, the CI fails at "run tests for base branch" because the new functionality doesn't exist in the base branch. We have plans to add a lot of new functionality and tests. I'm thinking we might need to remove this check against the base branch?

@d-cogswell I think we can divide the tests into 3 categories:

  1. tests which are both in the base and modified branch
  2. tests which are new in the modified branch
  3. tests which are in the reference solutions

Perhaps we can create 3 workflows from this, of which 1. and 3. are required to pass, but 2. is optional (as we expect that one to fail a lot). Let me know what you think.

Originally posted by @v1kko in #5 (comment)

Always test all python versions

The default on GitHub is that if one set of tests fails, the others are canceled. See e.g. PR #71.
We could change it to always test all python versions.

Calculating ffrac from volume fractions

I've been reading through the code and I've found a few things that seem inconsistent. Please let me know if I'm misunderstanding what the code is supposed to do.

ffrac is the overall filling fraction of the electrode. In mod_cell.py Lines 166-179 ffrac is calculated with

$$ffrac = dx \sum_{v} \sum_{p} \bar{c}_{v,p} V_{v,p}$$

This equation seems to treat $V_{v,p}$ as the particle volume as a fraction of the electrode volume $dx$. However, $V_{v,p}$ is actually the particle volume as a fraction of the total particle volume in the electrode volume (configuration.py Line 669). This is only equivalent when each electrode volume contains the same total particle volume, which is not the case when the particle size distribution is randomized.

Instead, lines 166-179 could be replaced with something like

# Define the overall filling fraction in the electrodes
for trode in trodes:
    eq = self.CreateEquation("ffrac_{trode}".format(trode=trode))
    eq.Residual = self.ffrac[trode]()
    tmp = 0
    # Particle volumes as fractions of the total particle volume of the whole electrode
    Vol_fracs = config["psd_vol"][trode] / config["psd_vol"][trode].sum(axis=None)
    for vInd in range(Nvol[trode]):
        for pInd in range(Npart[trode]):
            Vj = Vol_fracs[vInd,pInd]
            tmp += self.particles[trode][vInd,pInd].cbar() / config[trode,'csmax'] * Vj
    eq.Residual -= tmp

Inconsistent particle discretization

Mpet takes a particle radius and a computational cell width as input, but there is no guarantee that the number of cells (radius/discretization) will be an integer. When it is not an integer, mpet rounds it to an integer, which slightly changes the radius of the particle.

This choice was probably made so the phase-field models would have enough resolution for the phase boundaries. We may want to create an option for a desired number of computational cells, or at least print a warning when the number of cells is not an integer.
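A minimal sketch of the warning option mentioned above; the function name and the rounding rule are illustrative, not the actual mpet implementation.

import math
import warnings

def num_cells(radius, disc):
    # Number of computational cells across the particle (illustrative rounding rule)
    n_exact = radius / disc
    n = max(int(round(n_exact)), 1)
    if not math.isclose(n_exact, n, rel_tol=1e-9):
        warnings.warn(
            "radius/discretization = {:.4g} is not an integer; rounding to {} cells "
            "changes the effective particle radius to {:.4g}".format(n_exact, n, n * disc))
    return n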

Test Case Results Not Convergent

Some of the test case results are not convergent. E.g., the results of tests 9 and 17 are not converged to 1e-4; the test 10 results change at a tighter tolerance (it is a two-layer model and the results are flipped); test 11 fails to finish because of convergence issues; tests 13, 15, and 18 fail to initialize.

We can't really add new differential variables until these tests are fixed, because their convergence changes these results.
