tri-amdd / mpet
MPET lithium-ion battery simulator
License: Other
This is also a good way to check the consistency of plots, since the text output is obtained by extracting data from plots which are never displayed.
When the intra-layer reaction rate in CHR2 is too high, the initialization or the solver fails due to the small time scales.
In particular, the characteristic time for reaction is 100 times smaller than the characteristic time for diffusion in the particle.
Any ideas on how to solve this problem?
May I ask some questions about the CIET current rate expression? The equation is as follows:
Rate[i] = ecd_extras[i]*(krd*c_lyte - kox*c_sld[i])
Eq. 57 in the MPET paper is implemented incorrectly in the software. It should be implemented either as
F = -D * Dfunc/T * (dmu/dc) * \nabla c
if assuming a non-dilute solution model (and dc/dmu \approx Dfunc/T), or as
F = -D * \nabla c.
(I am leaning towards implementing this one, since it is more physically accurate for comparisons with other software and it is the actual definition of the flux.)
@d-cogswell what do you think?
A slight technicality for people who want to do a voltage hold on half cells. For half-cell models, the voltage pulse should only be applied to the solid electrical potential. However, the half-cell concentrated solution models include the kB*T*ln(a_l) term implicitly in the electrical potential. This is technically incorrect, but it is only a problem for people using half-cell voltage holds with the concentrated electrolyte model (probably only me right now). I'm keeping track of it in case people see this problem in the future.
E.g., in the function calc_mu_O in mod_electrodes.py,

```python
mu_lyte = T*np.log(act_lyte) + phi_lyte
```

should be true for both the SM and dilute models.
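A minimal standalone numeric illustration of that relation, with made-up non-dimensional example values (this is not mpet's actual calc_mu_O):

```python
import numpy as np

# Illustration of mu_lyte = T*log(act_lyte) + phi_lyte with example values
# (non-dimensional units; all numbers here are made up).
T = 1.0                      # temperature
act_lyte = np.array([0.8])   # electrolyte activity
phi_lyte = np.array([0.05])  # electrolyte electrical potential
mu_lyte = T * np.log(act_lyte) + phi_lyte
```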
The solver crashes when too many particles are put into the system. To replicate this problem, one can construct the simplest possible simulation (e.g. Nvol_c = 1, Nvol_s = 0, Nvol_a = 0, type = homog) and increase the particle number to "Npart_c = 3000." The program crashes while displaying "0% ..ETA."
Changing DAESOLVER settings (MaxNumSteps, IterIC, StepsIC, MemoryIncrements, etc) does not seem to affect this threshold.
Physical memory resources of the machine seem unrelated to the problem. Memory usage can be made minimal, but the initialization will still fail (about 600 MB for 2500 particles).
The threshold for crashing is platform dependent. On Windows (daetools 1.9.0), the threshold is about 2453 particles (plus or minus a few on each run). On a Linux machine the threshold seems to be above 4100, but eventually the same problem occurs. This is possibly related to the C compiler used on these platforms rather than the platform itself.
We will probably have to debug daetools to find the exact reason.
git repo root directory
Do you have access to the Coveralls settings needed to generate a coverage badge, @d-cogswell? We should extend the test suites to Ubuntu 18 and Windows (and potentially Mac, if we manage to install DAE Tools on OSX).
For example, I noticed that csld_{a,c} no longer work. Please check all plots, push any fixes to fix/hdf5, and create a new pull request since the old one has already been merged.
DAE Tools seems to be difficult to install on Ubuntu 18 and Mac. We should make sure that DAE Tools works on different platforms.
The following piece of code in io_utils.py is supposed to avoid extrapolation by changing tend:

```python
if "segments" in ndD_s["profileType"]:
    dD_s["tend"] = dD_s["segments_tvec"][-1]
    # Pad the last segment so no extrapolation occurs
    dD_s["segments_tvec"][-1] = dD_s["tend"]*1.01
```

However, this change is only applied to the dimensional config dict, after the non-dimensional version was already created from it a few lines above. The simulation uses only the non-dimensional config, so the old tend is used instead of the updated value.
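One way to fix this, sketched here with stand-in helper names (the real non-dimensionalization lives elsewhere in io_utils.py), is to apply the padding to the dimensional dict before the non-dimensional version is derived from it:

```python
# Sketch of the ordering fix; pad_segments and nondimensionalize are
# hypothetical stand-ins for the real logic in io_utils.py.
def pad_segments(dD_s):
    if "segments" in dD_s["profileType"]:
        dD_s["tend"] = dD_s["segments_tvec"][-1]
        # Pad the last segment so no extrapolation occurs
        dD_s["segments_tvec"][-1] = dD_s["tend"] * 1.01
    return dD_s

def nondimensionalize(dD_s, t_ref):
    # Stand-in for the real conversion: scale times by a reference time.
    return {"tend": dD_s["tend"] / t_ref}

# Pad first, then derive the non-dimensional config, so ndD_s sees the
# updated tend.
dD_s = {"profileType": "segments", "segments_tvec": [0.0, 100.0]}
ndD_s = nondimensionalize(pad_segments(dD_s), t_ref=10.0)
```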
Pytest stopped liking something about the directories being passed as arguments.
Hi,
Great program! The program seems to have an unintuitive way to set the discharge current, and I'm not sure if it's a bug or if I'm doing it wrong. It seems to me that if I set 1C_current_density = 1 and Crate = 1, the simulation should represent a discharge of 1 A/m2. However, by way of scaling, the program actually calculates the current with

currset = 1C_current_density * Crate * L_c / (e * (1 - poros_c) * P_L * rho_s * Damb)

In my simulations I'm looking to discharge cathodes of different thicknesses (L_c) at the same discharge current density. But if I double L_c, so does the discharge current. So I have to scale 1C_current_density with 1/L_c to make sure the current doesn't change across cathodes. This doesn't seem to be the intention behind 1C_current_density.

Furthermore, if you leave 1C_current_density empty, the current is instead calculated with

currset = Crate * L_c**2 / (3600 * Damb)

This, too, doesn't seem right. At a constant C-rate, the discharge current should be proportional to L_c, not L_c**2. The workaround (scaling with 1/L_c) is simple enough, but is this right?
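The scaling described here can be checked directly with the two formulas quoted above (a standalone sketch with placeholder parameter values, not mpet's actual code):

```python
# Illustration of the scaling behavior using the two currset formulas
# quoted in this issue. All parameter values below are made up.
def currset_with_1C(one_c_density, crate, L_c, denom):
    # currset = 1C_current_density * Crate * L_c / (e*(1-poros_c)*P_L*rho_s*Damb)
    return one_c_density * crate * L_c / denom

def currset_without_1C(crate, L_c, Damb):
    # currset = Crate * L_c**2 / (3600 * Damb)
    return crate * L_c**2 / (3600 * Damb)

denom, Damb = 1.0, 1e-10  # placeholders for the physical constants

# Doubling L_c doubles currset in the first formula ...
ratio1 = currset_with_1C(1, 1, 2e-4, denom) / currset_with_1C(1, 1, 1e-4, denom)
# ... and quadruples it in the second (L_c**2 dependence).
ratio2 = currset_without_1C(1, 2e-4, Damb) / currset_without_1C(1, 1e-4, Damb)
```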
I'm using version 0.1.8. If it helps, I'm deriving this from
mod_cell.py line 378
configuration.py lines 436-444, 480
derived_values.py lines 173-177, 157-160, 146-149, 141-144, 184-187, 189-198
Thanks
Martin
After running a simulation with dataReporter = mat followed by dataReporter = hdf5, there will be both output_data.mat and output_data.hdf5 in sim_output. This can create significant confusion when plotting data or exporting text data.
Perhaps the wrong path is given, or it should be a config option.
It's using the theoretical capacity of the electrode when calculating C-rate.
The config params are written to disk before running the simulation. If parameters change during the simulation, they are not updated in the output parameter files. Is this desirable, or would it be better to write them after running the simulation?
This was implemented in 19ba926, but the extra time variables have reappeared in output_data.mat.
Should we add it as an MPET dependency?
The current Dockerfiles are using daetools version 1.9.0. It would be great to update them to use the daetools Conda Forge package.
Plots with distance on the x-axis seem to have problems: elytec, elytep, elytei, csld_c, csld_a, cbarLine_c, cbarLine_a
Csmax is commonly reported in the literature.
We need to implement non-constant diffusivity NOT in the Dfuncs. The Dfuncs are basically the equivalent of the thermodynamic factor in the electrolyte diffusion, and the non-constant diffusivities should be accounted for in a different way.
Line 495: i_edges_int = -sigma_edges/T * (
There is an extra temperature dependence in this equation that should not be there. This temperature dependence should be in the electrolyte equation (22) in MPET.
Regression test definitions do not need to be coded in tests/test_defs.py, the configuration files for the tests already exist in the folders of tests/ref_outputs. The regression suite should look for test folders in tests/ref_outputs to determine the list of tests. This will make adding a new test as simple as adding a folder with the configs and reference solution to ref_outputs.
The code does not have documentation at the moment. We should build simple documentation on Read the Docs using Sphinx. We could explain how to install and run the code and analyse the results.
I created a dummy github repository and did some testing of this CI to make sure it catches errors. I noticed that if I add new functionality to mpet and create a new test for it, the CI fails at "run tests for base branch" because the new functionality doesn't exist in the base branch. We have plans to add a lot of new functionality and tests. I'm thinking we might need to remove this check against the base branch?
@d-cogswell I think we can divide the tests into 3 categories:
Perhaps we can create 3 workflows from this, of which 1. and 3. are required to pass, but 2. is optional (as we expect that one to fail a lot). Let me know what you think.
Originally posted by @v1kko in #5 (comment)
See warnings from regression tests, and this notice:
https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/
On my machines I get a "fail from tolerance" error for test022, while all other tests pass. This happens both on the development branch and on my refactor/config branch.
@v1kko Perhaps you have some idea on how to debug this? What is GHA doing differently from running the tests by hand?
The default on Github is that if one set of tests fails, the others are canceled. See e.g. PR #71
We could change it to always test all python versions.
When both varDataNew and varDataRef are h5py Datasets, the subtraction operator in compare_tests.py throws a type error. I think the solution is to use numpy array slicing to convert the Datasets to numpy arrays.
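A sketch of that conversion, relying on the fact that indexing with an empty tuple reads a whole h5py Dataset into a numpy array and is a no-op on plain arrays (helper names here are hypothetical, not compare_tests.py's actual code):

```python
import numpy as np

def to_array(var):
    # var[()] reads a whole h5py Dataset into memory as a numpy array;
    # for a plain ndarray it simply returns the array unchanged.
    return np.asarray(var[()])

def within_tolerance(varDataNew, varDataRef, tol=1e-8):
    # Subtract as numpy arrays, so a Dataset - Dataset subtraction
    # (which raises a type error) never happens.
    return np.max(np.abs(to_array(varDataNew) - to_array(varDataRef))) < tol
```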
mpetruntests.py is now only being used by the Bitbucket pipeline, and should be incorporated into the pipeline and removed. run_tests.py is now used to run regression tests.
It should be returned by the electrolyte definitions in props_elyte.py, but revert to the default definition if no function is defined.
I've been reading through the code and I've found a few things that seem inconsistent. Please let me know if I'm misunderstanding what the code is supposed to do.
ffrac is the overall filling fraction of the electrode. In mod_cell.py Lines 166-179 ffrac is calculated with
This equation seems to treat
Instead, lines 166-179 could be replaced with something like
```python
# Define the overall filling fraction in the electrodes
for trode in trodes:
    eq = self.CreateEquation("ffrac_{trode}".format(trode=trode))
    eq.Residual = self.ffrac[trode]()
    tmp = 0
    Vol_fracs = config["psd_vol"][trode] / config["psd_vol"][trode].sum(axis=None)
    for vInd in range(Nvol[trode]):
        for pInd in range(Npart[trode]):
            Vj = Vol_fracs[vInd, pInd]
            tmp += self.particles[trode][vInd, pInd].cbar() / config[trode, 'csmax'] * Vj
    eq.Residual -= tmp
```
Current/voltage differences are seen even at 200 particles with the same PSD. @d-cogswell, maybe we should add a flag to read in a specific distribution.
The cbar variable is undefined here:
https://github.com/TRI-AMDD/mpet-dev/blob/f973e536dc6eb2cc55317d82d8944659308a4a3d/mpet/plot/plot_data.py#L541
This seems more like a "particle class" function, maybe also something to change later
Originally posted by @v1kko in #54 (comment)
Mpet takes a particle radius and a computational cell width as input, but there is no guarantee that the number of cells (radius/discretization) will be an integer. When it's not an integer mpet rounds it to an integer, which slightly changes the radius of the particle.
This choice was probably made so the phase-field models would have enough resolution for the phase boundaries. We may want to create an option for a desired number of computational cells, or at least print a warning when the number of cells is not an integer.
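The effect can be illustrated with a small standalone sketch (example values, not mpet's internals):

```python
import math

# Example values: a 100 nm particle with a 3 nm computational cell width.
radius = 100e-9
disc = 3e-9
n_exact = radius / disc            # 33.33..., not an integer
N = int(round(n_exact))            # mpet-style rounding to 33 cells
effective_radius = N * disc        # 99 nm is what actually gets simulated
if not math.isclose(n_exact, N):
    # The suggested warning when radius/discretization is not an integer.
    print(f"warning: radius adjusted from {radius:g} to {effective_radius:g}")
```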
It is related to 'blit=True' in the call to FuncAnimation() and the animated titles implemented with _blit_draw().
They were working in v0.1.5. The error is:
ValueError: shape mismatch: objects cannot be broadcast to a single shape
Line 646 of configuration.py throws the error: module 'numpy' has no attribute 'int'.
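np.int was deprecated in NumPy 1.20 and removed in NumPy 1.24; the builtin int is the drop-in replacement. A minimal illustration of the fix:

```python
import numpy as np

arr = np.array([1.7, 2.3])
# Use the builtin int (or np.int_) instead of the removed np.int alias;
# astype(int) truncates each element toward zero.
idx = arr.astype(int)
```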
It only prints all the volumes from the first/last timepoint.
This call to open a continued simulation should not be in the variables loop:
https://github.com/TRI-AMDD/mpet-dev/blob/57ff4b64e9afc733bc58bf2fa468dde2057b44e9/mpet/data_reporting.py#L154
Some of the test case results are not convergent. E.g., tests 9 and 17 do not converge to 1e-4; test 10's results change at higher tolerance because it is a two-layer model and the results are flipped; test 11 fails to finish because of convergence issues; tests 13, 15, and 18 fail to initialize.
We can't really add new differential variables until these tests are fixed, because their convergence changes these results.