coconut's People

Contributors: awbral, hdolfen, joris13, ldlcour, ldmoerlo, mvervaec, navaneethkn, nicolasdlss, npynaert, toondm, victorvanriet


coconut's Issues

Switch unittest framework

All unittests must be moved from the old Kratos unittest framework to the standard Python unittest framework.

tube_tube_flow_tube_ringmodel example

Dieter pointed out that an error occurred while running this example. It originated from the JSON files not being updated with the new data structure. That issue is solved now; however, a second error occurs:

In the tube ringmodel solver, a ValueError for unphysical pressure is raised in the first iteration.

Update tests/solver_wrappers/abaqus

This test currently does not work due to a change in variables in the code itself.
Most likely it is simply the .json file that needs to be updated.

interpolator parameter in ModelPart mappers

The interpolators and transformers are currently distinguished by the parameter self.interpolator. This parameter is set in the __init__ of the base classes MapperInterpolator and MapperTransformer, but it actually stems from before there even were base classes.

This parameter is not required anymore and could be removed. Whether a ModelPart mapper is an interpolator or a transformer can instead be checked by looking at the base class with isinstance.
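
A minimal sketch of the proposed check; the module paths in the imports are assumptions about where the base classes live:

from coconut.coupling_components.mappers.interpolator import MapperInterpolator
from coconut.coupling_components.mappers.transformer import MapperTransformer

def is_interpolator(mapper):
    # the base class itself tells whether the ModelPart mapper interpolates
    # or transforms, replacing the self.interpolator flag
    return isinstance(mapper, MapperInterpolator)

def is_transformer(mapper):
    return isinstance(mapper, MapperTransformer)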

Fluent wrapper: read case and data file provided by user

Currently the Fluent wrapper reads in a case file provided by the user and initializes it (for the initialization a setting is present in the JSON file). I think a user often benefits from starting from an already converged case, which requires CoCoNuT to use the case and data file without (re)initializing them. I propose to remove the initialization from CoCoNuT and make the user responsible for providing an initialized (or even solved) data file.

Abaqus: Speed-up usrInit.f

For some cases USRInit.f takes quite a while. I propose some steps to make it faster and run it less often:

  • Don't run it if timestep_start is not 0 (omit the check of ...Elements.dat).
  • Currently two files are written by the user subroutine: ...Faces.dat and ...FacesBis.dat. In Tango they were compared to each other, but this test is not expected to fail. UTRACLOAD and DLOAD are responsible for the two files and are both required. I think just omitting the I/O operation in one of them can already be a speed-up (so it should just return F = 0). I would not delete it, but put it in a big conditional that can be set to TRUE in case self.debug is True in the solver wrapper code.

Things that could also be done, but which I suggest leaving as is for now because the programming is quite involved:

  • Parallelize it (difficult because of the file that has to be written).
  • It currently does multiple loops: detect when the second loop is started and abort. Currently it aborts when the increment increases (KINC > 1), which helps in cases with subcycling. However, even in this first increment it seems to loop twice over the load integration points. The detection could be based on the element number NOEL, but the variable keeping track has to be a global variable (like those defined at the top of the file under BLOCK DATA), such that it can be accessed upon a next function call (i.e. the next load integration point). Another caveat is the possible presence of multiple ModelParts, in which case the detection of the second loop happens at a different NOEL.

Python tube solver tests executable files

When downloading CoCoNuT as a zip file, the tests of the Python solvers don't work, because the execution rights of some files are not copied.
The tests for the Python solvers have to be updated to work even when the installation is done from a zip of the files.

Module names of solver wrappers

The solver wrappers are version specific and therefore have a name that contains the version number, e.g. for Fluent the current Python file is called 2019R1.py.
The disadvantage of this approach is that this file cannot be imported, for example

from coconut.coupling_components.solver_wrappers.fluent.2019R1 import SolverWrapperFluent2019R1

won't work. The problem is that the name starts with a number.

Therefore, I propose to change all Python filenames so that they start with a letter; that way they can be imported. For Fluent, I would use v2019R1.py, so that the class can be imported e.g. as

from coconut.coupling_components.solver_wrappers.fluent.v2019R1 import SolverWrapperFluent2019R1

An import statement like this can be useful when creating a new version of the solver wrapper using inheritance, for example SolverWrapperFluent2020R1 can be based on the superclass SolverWrapperFluent2019R1.

Abaqus updates

As some people have started to use CoCoNuT, some remaining issues of the Abaqus wrapper have come to the surface, which mainly have to do with user-friendliness. As I have encountered the same struggles multiple times when helping people start with CoCoNuT, I believe that some adaptations are needed to make it more user-friendly and to make the documentation clearer where still needed after improving the code. Below is a general to-do list for the Abaqus wrapper, subdivided by importance. Once we get started on this we can create separate issues.

Need/important/urgent:

  1. Add a check that the correct modules are loaded and license servers are set. This can probably be based on a readout of the environment (os.environ can be used to inspect the environment). Also look at the Fluent wrapper (shutil.which can check the availability of commands); see the sketch after this list.
  2. Check the default value of ramp, could be a mistake in the CoCoNuT documentation.
  3. Mention in the documentation that the application should be put on quasi-static or moderate dissipation, such that the keyword "application" can be found. In the current version not doing this causes an error, because the line *Dynamic,application=... is not written, while the Abaqus wrapper tries to find it.
  4. Check if the compiler info is well explained in the documentation (optional if we have an automatic software/environment check).
  5. If an element occurs twice in a surface (different faces of that element, e.g. the trailing edge of an airfoil has an element on the pressure side and the suction side), the makeElements method raises a ValueError. One could argue that it is not desirable to have two faces separated by a corner in the same ModelPart, because this could also cause interpolation mistakes. An improvement could be to detect this situation and raise a clear error about what's wrong.
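
For the first item, a minimal sketch of what such an automatic check could look like; the command name and license variable below are hypothetical placeholders for whatever the cluster setup actually uses:

import os
import shutil

def check_software(command='abaqus', license_var='LM_LICENSE_FILE'):
    # check that the solver command is available on the PATH
    if shutil.which(command) is None:
        raise RuntimeError(f'command "{command}" not found; '
                           f'load the required module before running CoCoNuT')
    # check that the license server variable is set in the environment
    if license_var not in os.environ:
        raise RuntimeError(f'environment variable "{license_var}" not set; '
                           f'check the license server configuration')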

Nice/not urgent:

  1. The line in the write_loads method where the wrapper writes to the file read by USR.f: add a comment specifically describing the output format here. A mistake won't raise an error but will mess up the results, and it is hard to find the source of the mistakes. Alternatively, a check could be added to the unit tests.
  2. The run_shell method could have a rare race condition on the deletion of the script. Maybe just don't use it and use subprocess.Popen instead, as it is not strictly necessary.
  3. Try to make the wrapper faster: run a datacheck to check the input file and prepare the simulation, and 'steal' the generated files. They should at least be the same within a coupling iteration, as the only thing that changes are the loads, which Abaqus can't possibly know beforehand and thus cannot affect these files. Then the command continue can be used to do the simulation. As such the pre-processor is only executed once, which can save time on larger meshes.
  4. Clean up the code w.r.t. style and comprehensible comments.
  5. Look at nomenclature: thread_name and thread_id mirror variable names in Fluent, but make no sense for Abaqus.

Documentation maintenance

This issue lists all the review tasks of the documentation maintenance that we will do in the next few weeks (finish by 18/3).
I propose to work in 2 stages, so that every file gets checked twice:

  • First a review by the author of the code (or at least someone who is familiar with it), because they will know best which information is missing or wrong. Missing documentation should also be added at this stage, and existing documentation should be updated if necessary (e.g. because of the new data structure).
  • Later, a review by someone else.

How to review and change docs

A branch docs-maintenance has been created, which will be merged with master once the maintenance is finished.

  • Checkout branch docs-maintenance
  • Preview docs locally, e.g. python run_mkdocs.py --preview mappers
  • Make changes to .md file
  • Push changes to remote (merge remote in local if necessary)

As everyone will be working on the files allotted to them, there will (should) be no merge conflicts.

Style & layout guide

  • Use code style for:

    • class and method names (and plurals): Model, ModelParts, __init__, finalize
    • JSON keys and values: coupled_solver, delta_t
  • Use code style + italics for:

    • files: run_simulation.py, parameters.json
    • folders: data_structure, /coupling_components/solver_wrappers/mapped.py
  • Use normal text for:

    • referring to abstract principles (i.e. not a specific class): solver wrappers, mappers, coupled solver, data structure
  • Title of markdown page (e.g. # Mappers, the first line of the MarkDown file):

    • should be brief and not repeat information that can be deduced from the structure of the documentation; e.g. for the Fluent solver wrapper: just use # Fluent and not # Fluent solver wrapper, as it is beneath Solver wrappers on the website
    • don't use class names (i.e. no camelcase), so not something like # SolverWrapperOpenFOAM
  • For subtitles (that start with ##, ###,...), you can refer to class names or methods, but use code style in that case.

  • If you refer to other MarkDown pages in the documentation, it can be useful to use a relative link. For more info, see docs documentation.

  • Recommendation for links: it is nice that the link text gives you some information about where the link goes, so avoid generic link texts like 'click here'.

Stage 1: review by code-author (mostly) → 10/3

  • README.md: Henri
  • coupling_components.md: Henri
  • convergence_criteria.md: Axel
  • coupled_solvers.md: Nicolas
  • models.md: Nicolas
  • mappers.md: Toon
  • predictors.md: Nicolas
  • solver_wrappers.md: Henri
  • abaqus.md: Henri
  • fluent.md: Toon
  • kratos.md: Navaneeth
  • openfoam.md: Mathieu
  • python.md: Nicolas
  • data_structure.md: Navaneeth --> missing!
  • docs.md: Niels
  • tests.md: Axel --> extend to give good overview of different test options (single file, single method, explanation of -b, -v etc)

Stage 2: peer-review → 18/3

  • README.md: Toon
  • coupling_components.md: Axel
  • convergence_criteria.md: Nicolas
  • coupled_solvers.md: Henri
  • models.md: Henri
  • mappers.md: Mathieu
  • predictors.md: Axel
  • solver_wrappers.md: Toon
  • abaqus.md: Mathieu
  • fluent.md: Niels
  • kratos.md: Axel
  • openfoam.md: Navaneeth
  • python.md: Niels
  • data_structure.md: Mathieu
  • docs.md: Toon
  • tests.md: Navaneeth

Hardcoded Variables in solver wrappers

The solver wrappers can currently only deal with a small number of Variables, namely the ones used in FSI: pressure, traction and displacement.
In the JSON file, however, you can give whatever Variables you like to the Interfaces. This will eventually lead to a runtime error, but probably not a very clear one.

As a solution, we discussed hardcoding the Variables in the current solver wrappers (they can't handle any others anyway), and adding a check in the solver wrapper to see if the Interfaces have the expected Variables. That way the error thrown at runtime would clearly indicate if a wrong Variable is used.
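
A minimal sketch of such a check, assuming the wrapper can extract the set of Variable names from an Interface (the extraction itself is solver specific):

ACCEPTED_VARIABLES = {'input': {'pressure', 'traction'},
                      'output': {'displacement'}}

def check_variables(side, variables):
    # side is 'input' or 'output'; variables is the set of Variable names
    # found on the corresponding Interface
    unsupported = set(variables) - ACCEPTED_VARIABLES[side]
    if unsupported:
        raise ValueError(f'unsupported Variable(s) {sorted(unsupported)} on the '
                         f'{side} Interface; this solver wrapper only handles '
                         f'{sorted(ACCEPTED_VARIABLES[side])}')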

Writing drag or lift values does not occur at the end

When using a monitor in Fluent to write data such as lift and drag coefficients to a file, the writing operation is done at the start of a time step rather than at the end, after (ti-menu-load-string "solve dual-time-iterate 1 0"). In this way, the data always lags one time step. Moreover, as the nodes are displaced before this data is printed, it is not a pure 'lagging', but there is also some deviation:
(left: values written to the file; right: values taken from the GUI (correct))

"drag-rset"
"Time Step" "flow-time etc.."
("Time Step" "flow-time" "drag")
0 0 0
1 0.002 0				|	0.028120664
2 0.004 0.02812066304860059		|	0.086690928
3 0.006 0.08669090634451442		|	0.15015176
4 0.008 0.1501516958477103		|	0.21807432
5 0.01 0.2180741961229031		|	0.28941945


"lift-rset"
"Time Step" "flow-time etc.."
("Time Step" "flow-time" "lift")
0 0 0
1 0.002 0				|	-0.0011475312
2 0.004 -0.001147531168200983		|	-0.002402138
3 0.006 -0.002402137926860938		|	-0.0022637938
4 0.008 -0.002263793670246033		|	-0.0019021652
5 0.01 -0.001902165000144555		|	-0.0015391751

This excerpt from the log file clearly shows what is happening:

RECEIVED MESSAGE continue

### timestep 2, iteration 1 ###
solve dual-time-iterate 1 0
Updating solution at time level N...
done.

Updating mesh at time level N... 
Finished UDF move_nodes.

Finished UDF move_nodes.

Finished UDF move_nodes.

Finished UDF move_nodes.

Finished UDF move_nodes.

Finished UDF move_nodes.
done.

iter  continuity  x-velocity  y-velocity     time/iter
!  450 solution is converged
  450  5.0626e-07  2.9174e-09  1.7646e-09  0:00:00    0
 step  flow-time        lift   flow-time        drag        cl-1        cd-1
    2  4.0000e-03 -1.1475e-03  4.0000e-03  2.8121e-02 -1.1475e-03  2.8121e-02
Flow time = 0.004s, time step = 2
solve iterate 1000
 iter  continuity  x-velocity  y-velocity     time/iter
!  450 solution is converged
  450  5.0626e-07  2.9174e-09  1.7646e-09  0:00:31 1000
  460  5.6188e-01  4.5879e-04  2.6474e-04  0:00:31  990

The deviation is rather small at this time, but I expect it will be much larger when the deformations are bigger.

The monitor was defined as follows:

report reference-values velocity 1
report reference-values area 1
report reference-values density 2
solve monitors force drag-coefficient y circleoutside beamtopoutside beamrightoutside beambottomoutside () y y drag.frp n n 1 0
solve monitors force lift-coefficient y circleoutside beamtopoutside beamrightoutside beambottomoutside () y y lift.frp n n 0 1

Abaqus tests: don't use constant pressure and shear stress

In the Abaqus tests a constant pressure (and shear) field is currently applied. It is better to take something that is a function of the coordinates, such that an error in the ordering of the load points can be detected. This could be, for example, a linearly varying field or a parabola.
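
A minimal sketch of what such a load field could look like, assuming the tube axis is the z-axis and the load points are given as an (n, 3) coordinate array (the numerical values are arbitrary):

import numpy as np

def pressure_linear(coordinates, p0=1000.0, slope=500.0):
    # linearly varying pressure along the tube axis: a permutation of the
    # load points changes the applied field, so ordering errors are detected
    return p0 + slope * coordinates[:, 2]

def pressure_parabola(coordinates, p0=1000.0, a=1e4):
    # parabolic alternative, centered on the middle of the tube
    z = coordinates[:, 2]
    return p0 + a * (z - z.mean()) ** 2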

Plotting of the test_examples

I would like to adjust the debug files in order to be able to easily plot the tube_examples.
This would be nice with respect to educational purposes, but also for debugging purposes.
Now, there is a variable self.debug in the solver wrappers which saves the output/input of the solver if this boolean is set to true. However, these files don't always have the most logical name or structure.

The use of these files should not be limited to the tube examples.
Therefore, the structure of these files could be:

  • output structural solver: output_displacement_timeX

    x-displacement  y-displacement  z-displacement
    0.0             0.0             0.0

  • output flow solver: output_load_timeX

    pressure  x-traction  y-traction  z-traction
    0.0       0.0         0.0         0.0

(possibly same for the input)
When initializing the solver a file should also be generated with the coordinates of the load points or nodes on the fluid-structure interface, in the same order as the output files: output_coordinates

x-coordinate  y-coordinate  z-coordinate
0.0           0.0           0.0

Based on these files, the correct points can be selected, ordered and used for plotting in a very generic way.
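
A minimal sketch of such generic plotting, assuming the file layout proposed above (whitespace-separated columns with a one-line header; the time step in the file name is just an example):

import numpy as np
import matplotlib.pyplot as plt

coordinates = np.loadtxt('output_coordinates', skiprows=1)            # (n, 3)
displacement = np.loadtxt('output_displacement_time10', skiprows=1)   # (n, 3)

# order the interface points along the tube axis (here assumed to be z)
order = np.argsort(coordinates[:, 2])
plt.plot(coordinates[order, 2], displacement[order, 1], '.-')
plt.xlabel('z-coordinate')
plt.ylabel('y-displacement')
plt.show()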

This may seem like a lot of work, but most of it has already been implemented. Still, it will take some time to adjust everything.

Saving interval and restart interval

Save restart

All solver wrappers inherit a parameter save_restart from the coupled solver. This parameter is an int (positive, negative or zero).
Based on the value of the parameter, the solver wrapper should have the following behaviour:

  • >0: save files for restart when the time step is a multiple of save_restart
  • =0: do not save any files for restart
  • <0: same as >0, but only keep the latest files

In this way, the user can specify the save interval for restart in one place only, as it does not make sense to save files for restart at different times in the different solver wrappers and coupled solvers.
The files for restart themselves are solver specific, but should allow a restart from that specific time step.
In the coupled solver the parameter is optional, and its default value is -1 (save every time step, but only keep the last). Similar to timestep_start and delta_t, its value will usually be determined by the coupled solver; nonetheless, it can also be provided to the solver wrapper itself for standalone testing. Whether or not this parameter is optional in that case, and what the default value is, is up to you.
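
A minimal sketch of the proposed behaviour inside a solver wrapper; the two helper functions are hypothetical stand-ins for the solver-specific file handling:

def write_restart_files(timestep):
    ...  # solver-specific

def remove_restart_files(timestep):
    ...  # solver-specific

def handle_restart_saving(timestep, save_restart):
    if save_restart == 0:
        return  # never save files for restart
    if timestep % abs(save_restart) == 0:
        write_restart_files(timestep)
        if save_restart < 0:
            # only keep the latest files: drop those of the previous save
            remove_restart_files(timestep + save_restart)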

Save results

Besides saving files for restart purposes, the user may also want to save files for postprocessing.
In a calculation with many time steps, this will typically be every so many time steps.
For this reason there should be a save_interval or save_results setting (the name can be discussed), which allows setting this interval (int > 0). In some solver wrappers, the current name is save_iterations, but this is confusing and should be changed.

The specific files which are stored for postprocessing will depend on the solver wrapper. Possibly they are the same files that are kept for restart, but not necessarily.
Files which are not required for postprocessing should be removed.

Feel free to comment any remarks on the proposed workflow.

To do

  • Fluent
  • Abaqus
  • OpenFOAM
  • Kratos

Abaqus wrapper: allow mixed interface

Currently, the Abaqus wrapper cannot handle an interface which has mixed element types (e.g. when using a hex-dominant mesh).

The error starts with the make_elements function writing the number of load points of the last face it has looped over. This value is read again later in the code as n_lp. Eventually something goes wrong when the input ModelPart is created. In the example of mixed triangular and quadrilateral faces, n_lp is either 3 or 4. For looping over the faces file, the product n_elem*n_lp is used as range. If n_lp was set to 3, too few lines of the faces file will be read (as some elements have 4 lines). It is not clear where exactly this would go wrong; maybe a bounding box error would be raised about the interfaces, but it is also possible that the interpolation will just be inadequate. If n_lp was set to 4, the reading of the faces file continues too far, raising an error.

I think this can be solved by changing the way we loop over the faces file.
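
As a minimal sketch of the idea, assuming each line of the faces file starts with a face (element) identifier followed by the load point data (the file name and the column layout are assumptions):

from collections import defaultdict

faces = defaultdict(list)
with open('CSM_Time0Surface0Faces.dat') as f:
    for line in f:
        values = line.split()
        if values:
            # group load point lines per face instead of assuming a fixed
            # number of lines per face
            faces[values[0]].append([float(v) for v in values[1:]])

# the number of load points now follows per face (3 for triangles,
# 4 for quadrilaterals) instead of one global n_lp
n_lp = {face_id: len(points) for face_id, points in faces.items()}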

Additional example cases

Currently the example cases are limited to variations of the tube (1D, 2D, 3D, mixed...).
I think it would be useful to have a few more example cases (although not necessarily as many as Tango has), so that the user has some more variation as inspiration. E.g. an example where the Interface consists of several ModelParts would be nice.
Also, I don't think every example should have to work with each version of the software (e.g. different Fluent versions), because that would be too hard to maintain and would lead to too much code.

Simplifying the predictors

@nicolasdlss remarked that the predictors can be simplified. More specifically, the number of files can be reduced.

We now have 6 predictor files & classes: base class, constant, linear, quadratic, cubic and legacy.
This could be reduced to just one class, which has a single setting specifying the order of the predictor (0, 1, 2, 3). We could add a special setting for the legacy predictor, which can be used when comparing results with Tango.
This would result in a bit less code in total, but most importantly in fewer files.
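
A minimal sketch of such a unified predictor; the coefficients are the standard polynomial extrapolation coefficients, which I assume correspond to the existing constant, linear, quadratic and cubic classes:

import numpy as np

COEFFICIENTS = {0: [1.0],                    # constant
                1: [2.0, -1.0],              # linear
                2: [3.0, -3.0, 1.0],         # quadratic
                3: [4.0, -6.0, 4.0, -1.0]}   # cubic

class Predictor:
    def __init__(self, order):
        self.order = order
        self.history = []  # most recent solution first

    def update(self, x):
        self.history.insert(0, np.asarray(x, dtype=float))
        self.history = self.history[:self.order + 1]

    def predict(self):
        # call after at least one update; fall back to the highest order
        # the available history allows
        order = min(self.order, len(self.history) - 1)
        c = COEFFICIENTS[order]
        return sum(ci * xi for ci, xi in zip(c, self.history))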

Thoughts on this?

OpenFOAM issues & improvements

Remaining OpenFOAM issues and improvements:

  • remove start_time, end_time and dt from the JSON solver_wrapper settings and use delta_t and timestep_start from the coupled solver
  • tempDisp and lengthDisp appear outside of the OF working directory
  • mkdir error message
  • in the PimpleFoam and InterFoam folders: are there files which can be deleted, because they are generated on compilation?
  • other prints to be removed in the future inside the solver_wrapper
  • update documentation of the example and incorporate the README
  • make the name of the solver wrapper consistent with the other solver wrappers
  • a calculation with OF causes a lot of files: 12 files per time step per processor => 8 cores, 100 time steps ~= 10 000 files

Fluent wrapper: small memory access bug in udf

There appears to be a small issue with the get_thread_ids function in the UDF. A char variable tmp is initialized on line 77, and values are assigned on line 89. The values are in fact strings (i.e. char arrays), so sufficient memory should be allocated. This has given no problems so far for cases with 1 ModelPart, but for multiple ModelParts a segmentation violation results, probably because data is written to unallocated memory. This should be fixed by reserving sufficient space for the tmp variable. As the variable is actually unused, maybe passing NULL instead can be tried.

DEFINE_ON_DEMAND(get_thread_ids) {
    /* read in thread ids, should be called early on */
#if !RP_NODE
    char tmp; /* BUG: a single char, while fscanf below writes a string ("%s") to it */
    int k;
    FILE *file;
    file = fopen("bcs.txt", "r");
    fscanf(file, "%i", &n_threads);
#endif /* !RP_NODE */
    host_to_node_int_1(n_threads);
    ASSIGN_MEMORY(thread_ids, n_threads, int);
#if !RP_NODE
    for (k = 0; k < n_threads; k++) {
        fscanf(file, "%s %i", &tmp, &thread_ids[k]); /* writes past &tmp: memory corruption */
    }
    fclose(file);
#endif /* !RP_NODE */
    host_to_node_int(thread_ids, n_threads);
}

pressure_traction_test for Fluent v2020R1 fails on cfdclu55

FAIL: test_pressure_traction (coconut.tests.solver_wrappers.fluent.test_v2020R1.TestSolverWrapperFluent2020R1Tube3D)

Traceback (most recent call last):
  File "/cfdfile2/data/fm/nicolas/python_packages/coconut/coconut/tests/solver_wrappers/fluent/test_v2019R1.py", line 314, in test_pressure_traction
    np.testing.assert_allclose(traction[0], traction[2], rtol=1e-11)
  File "/apps/SL6.3/Anaconda/2019.07/python3.7/lib/python3.7/site-packages/numpy/testing/_private/utils.py", line 1501, in assert_allclose
    verbose=verbose, header=header, equal_nan=equal_nan)
  File "/apps/SL6.3/Anaconda/2019.07/python3.7/lib/python3.7/site-packages/numpy/testing/_private/utils.py", line 827, in assert_array_compare
    raise AssertionError(msg)
AssertionError: 
Not equal to tolerance rtol=1e-11, atol=0

 

Mismatch: 0.0217%
Max absolute difference: 3.33955086e-13
Max relative difference: 1.2415593e-11
 x: array([[-10.349147,  -8.341815,  -4.579155],
       [ 46.352385,   2.219764,   2.705838],
       [ 26.331443,  -7.791217,  -2.45801 ],...
 y: array([[-10.349147,  -8.341815,  -4.579155],
       [ 46.352385,   2.219764,   2.705838],
       [ 26.331443,  -7.791217,  -2.45801 ],...

The test uses the maximum number of cores on the machine (in this case 24 cores).
For other Fluent versions the test does work.
Either the tolerance should be relaxed or the number of cores should be limited.

Restructuring of tube examples

The number of tube example cases is becoming quite high, as we have combinations of Python solvers, Fluent, OpenFOAM, Abaqus, Kratos, 2D and 3D.
That means that e.g. the setup for the Fluent 2D case is saved several times, in different folders.
Perhaps we could think of a more efficient way to organize this single example, where each solver wrapper has only one set of setup files, and many different combinations are possible, based on e.g. a folder of JSON files.
Something to discuss after the new data structure has been merged?

Remove X, Y, Z attributes from Nodes

We recently discussed the usefulness of the X, Y, Z attributes in the Node objects.
It seems that these are only used in the Fluent solver wrapper, and that even there they can easily be removed.
The conclusion was therefore that we should remove them from all CoCoNuT code, to avoid confusion and outdated info.

Some more thoughts about the Node coordinates:

  • We use X0, Y0, Z0 to store the original coordinates. These are used everywhere (e.g. mappers). We could rename them to X, Y, Z as we only store one set of coordinates, but I would not do that so that it is always clear that these are the original coordinates from timestep 0.
  • X0, Y0 and Z0 may never be changed (nor the Id of the Node, for that matter). Should we somehow protect them, so that they can only be set during creation of the Node? Constant attributes don't exist in Python, but this post gives some good alternatives; a minimal sketch of one alternative follows after this list.
  • For visualization (post-processing) it can be interesting to plot the deformed geometry. We could then create X, Y, Z automatically during export (to VTK for example) if the Variable "displacements" is available in the ModelPart. That way the X, Y, Z coordinates are definitely up to date, which is currently not always the case.
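
A minimal sketch of one such alternative, exposing the original coordinates (and the Id) as read-only properties that can only be set when the Node is created:

class Node:
    def __init__(self, Id, X0, Y0, Z0):
        self._Id, self._X0, self._Y0, self._Z0 = Id, X0, Y0, Z0

    @property
    def Id(self):
        return self._Id

    @property
    def X0(self):
        return self._X0

    @property
    def Y0(self):
        return self._Y0

    @property
    def Z0(self):
        return self._Z0

# node.X0 = 1.0 now raises an AttributeError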

Mesh software for Kratos

Currently, the mesh for the Kratos example is made by Navaneeth using Salome. However, Salome is not installed on our cluster and therefore has to be installed locally.

Fluent: logfiles

Fluent outputs a lot of log files:

  • fluent.log
  • log
  • transcript.txt

These are actually more or less duplicates.
The fluent.log file contains at first glance all information.
The others should not be created or kept.

Abaqus: use case setup scripts in examples

I noticed that for the Fluent side of the test examples, the mesh file and case file are created using journals. Currently we start from a complete .inp file in Abaqus. As we refer a lot to the Python interface of Abaqus in the documentation, it would be nice to have some examples using a Python script to generate the .inp file, either starting from scratch or starting from an existing mesh file. (I have some code to create a continuum-element mesh file for a clamped-clamped tube in case we choose that.)

Docs test_single_solver

The test_single_solver is a coupled_solver that can be used to test the single-physics solvers separately. This is very useful, e.g. for debugging Fluent and Abaqus test-cases before running the full FSI simulation. Currently, users have an example case of the test_single_solver to base their code on, but there is no documentation yet.

So: documentation must be made for the test_single_solver, to detail how it works, and what all the possible options are.

Speeding up operations with Interface objects

Currently the cost of the coupling algorithms (without the solver wrappers!) is mainly determined by the inefficient implementation of the Interfaces. Of course, the cost of the coupling algorithm is (usually) negligible with respect to that of the solvers, so this is not a pressing issue. However, I still think some redesign of the Interface class to boost performance here would be a good thing for the code.

As an illustration, I show here the cost of Interface operations, for an Interface with 10 000 DoFs:
[figure: timing of the individual Interface operations]
The deepcopy function (which is also used in add) is clearly the bottleneck here, although the get and set functions are also quite slow.

In an actual simulation with 10 000 DoFs, this results in the following distribution of time spent during the coupling:
[figure: breakdown of time spent during the coupling]

This shows that most of the overhead can be avoided by not adding and subtracting with Interfaces but with ndarrays. However, SetNumpyArray is still a lot slower than all of the linear algebra together (which includes numpy.linalg.qr and scipy.linalg.solve_triangular).

I think storing data in ndarrays would be much more efficient. However, as Interfaces are only references to ModelParts, that would require really big changes to the whole data_structure of CoCoNuT, so perhaps that is too extreme.

Anyway, this is something we should think of and discuss somewhere in the future in my opinion.

CFD and CSM tests for case setup

To make it easier for (new) users to get their case up and running, I thought it would be interesting to provide the possibility of testing the involved solvers separately, as was possible in Tango.

After a discussion on the implementation with @ldmoerlo, @nicolasdlss, @toondm we thought about doing this as outlined hereunder (correct me if I made a mistake or forgot something).

The idea is:

  • This is a case-oriented test and not directly intended to test the code of the SolverWrapper. Rather it should test the setup of the user-provided files and settings
  • The testing of one solver should not rely on the other solver
  • The testing should be easy and clear to set up
  • Required alterations to the .json-file should be minimal and not interfere with the settings that are used for running the complete simulation
  • Preferably the testing does not require alterations to the SolverWrapper codes

What is tested?

  • If the solver receives correctly formatted input from CoCoNuT then it will run and produce output

The suggested implementation is:

  • Define a new coupled_solver for testing a single solver (e.g. test_single_solver)
  • A normal coupled_solver requires "settings" in the .json file. The test_single_solver will require "test_settings" and will also be able to read from "settings"
  • The basic testing happens by providing input files with 0-displacement for the flow solvers and 0 loads for the structural solvers
  • Add the possibility of performing slightly more advanced testing by defining Python functions that return a load or displacement based on x, y, z of the node/load point and possibly time (test-function); a sketch follows after this list
  • The "test_settings" will need to define what solver needs to be tested (and possibly later on an identifier for the function that the user wants to apply as test-function)
  • For mapped solvers only the deepest level will be tested and not the mapping itself
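
A minimal sketch of what such a user-defined test-function could look like for a flow solver; the signature is an assumption, as nothing has been fixed yet:

import numpy as np

def test_load(x, y, z, t):
    # pressure varying along the tube axis and in time; traction zero
    pressure = 1000.0 * (1.0 + z) * np.sin(2.0 * np.pi * t)
    traction = np.zeros(3)
    return pressure, traction

def zero_load(x, y, z, t):
    # the basic test corresponds to this trivial function
    return 0.0, np.zeros(3)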

Loading software at coconut runtime

Currently, all software that is used by the different solver wrappers has to be preloaded in the terminal where the coconut simulation will be launched. This gives some issues, e.g.:

  • Kratos & coconut require different Python versions, so they cannot work in the same terminal. This was temporarily fixed by loading the Kratos module at runtime, based on a bash command in parameters.json.
  • Fluent (ANSYS) & OpenFOAM cannot be loaded together; something MPI-related throws an error when starting Fluent. This is not yet an issue, as we cannot couple these solvers at the moment, but it could become one in the future.

Hence the idea to load the software for a specific solver wrapper only at runtime. This would also give the user a clear location where to write system-specific commands to configure software and licenses.

Abaqus: use of surfaceIDs and their order

Currently the JSON file for Abaqus requires a list called surfaceIDs. This list should contain strings referring to the nodesets (as defined in the .inp file) corresponding to the surfaces with fixed names MOVINGSURFACE<i>, where <i> is 0, 1, etc. This list must have the same order as these surfaces.
This seems rather complicated, especially for a new user. It is explained in the documentation; nonetheless, it still took me a few reads to really grasp what is required.

Ideally, this would no longer be needed and surfaceIDs could even be removed.
However, it is currently used to link the name of the surfaces to the file to/from which data should be written/read: e.g. CSM_Time1Surface0Output.dat.

Changing it requires a modification to USRInit.f and USR.f. The same code has to be changed in both: the integer following MOVINGSURFACE is detected to determine the name of the file where the load points are stored (USRInit.f) or from which the loads are read (USR.f). It occurs twice in each USR file.

See for example:

R = INDEX(SNAME,'SURFACE')
READ(SNAME((R+7):LEN(TRIM(SNAME))),'(I)') R
R = R+1

Here SNAME is provided by the DLOAD or UTRACLOAD subroutine and corresponds to the name of the surface in Abaqus itself. Currently this name has to be MOVINGSURFACE followed by an integer, which is extracted with this code.

One option is to substitute a list of names of surfaces, as is now done in GetOutput.cpp, and use that list to match the SNAME and as such determine the file to be read. The names could be inferred from the Interfaces, and as such the surfaceIDs parameter could be completely omitted. The order would then no longer matter (as long as it's the same as for the other solver wrapper).

We could even change the filenames of the .dat files (containing load points, loads, output nodes, displacements) to contain the name of the ModelPart instead of the integer. Or put them in directories named after these ModelParts. But that's a higher-level design choice for the code, as we would ideally do something similar for each solver wrapper.

Markdown files and documentation structure

I think it may be useful to put a .md file in the coupling_components folder. This would be the highest level in the documentation about the use of CoCoNuT and is typically the place where the framework is entered (via analysis.py). Some general explanation can be given here, followed by some specific things. I think this would be a more logical "front page" of the documentation website; now the "CouplingComponents" tab opens the page of "CoupledSolvers".

I would explain how the analysis.py file works, because this is typically the script someone will use to start a simulation. A bit of explanation about tools.py, component.py and interface.py could also be useful.

Furthermore this is the logical location to explain the number_of_timesteps setting. Its meaning is of course self-evident, but for a new user it is not obvious what its value should be for a steady simulation.

Maybe you find this overkill; it's just a suggestion. The page should not necessarily be very long either.

Run individual solver wrapper tests from their files

Currently, it is not possible to run individual solver wrapper tests (for Fluent and Abaqus) from the file itself, because the paths are configured for running from the tests directory. Nonetheless, this would be a nice option. By modifying some paths this will be possible.

These Python commands are useful in that regard:

  • os.path.dirname(__file__): returns the directory name of the given path
  • os.path.realpath(path): eliminates any symbolic links encountered in the path
  • os.getcwd(): returns the current working directory of the process

Alternatively, os.chdir(tmp_example_path) can be used to change the directory from which the files are run (similar to cd in the terminal).
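
Combining these, a minimal sketch of what could be put at the top of a test file so that all relative paths resolve with respect to the file's own directory:

import os

# run the test from the directory containing this file, regardless of
# where the interpreter was started
os.chdir(os.path.dirname(os.path.realpath(__file__)))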

Abaqus wrapper: incrementation setting

A part of the code where many errors originate is the write_start_and_restart_inp method. This method reads an Abaqus input file (.inp) and creates two new input files: one called CSM_timeX for the first time step X+1 and one called CSM_Restart for all subsequent time steps.

In the original input file, the incrementation settings are substituted using the settings in the JSON file (delta_t and, if subcycling is enabled, also minInc, initialInc, maxNumInc and maxInc). It does this substitution by looking for keywords in the input file. The incrementation information is found by searching for the line containing the word *dynamic or *static. In case of *dynamic (unsteady cases), it also looks for the word application on the same line, checking the type. Here some issues arise:

  1. Currently application is only allowed to be quasi-static or moderate dissipation; for anything else a NotImplementedError is raised (in 614.py).
  2. Abaqus users who prefer the GUI and use default settings in their step definition end up with an input file not containing the word application. The code currently cannot deal with that and raises an error that is hard to interpret.
  3. This makes the code rather rigid: Abaqus users who want to use a different application have to adapt this part of the code to accept their input files as well.

I think we should discuss what a desirable solution is to make the code less rigid and more user-friendly. I think of the following options:

  1. Keep it, but make it robust against the second issue (e.g. by assuming default values), or mention in the documentation that no default value may be used for the application. It is also possible to omit the check of the application. Then the third issue is still there, which we should either accept, or try to implement all possible Abaqus set-ups (which is hard to predict and makes the code very long).
  2. Keep the substitution of the increments, but omit the part where it checks the application.
  3. Omit the substitution of the increments. The user has the responsibility to use settings that are compatible with CoCoNuT (thus writing output at the correct times and, of course, being able to converge). This is maybe more elegant and versatile. If this solution is chosen, we could still try to extract the incrementation settings from the input file and compare them to the delta_t setting (which is used by the Fluent wrapper), just to raise a warning when there is a difference. Information on subcycling would then probably be omitted from the JSON file, because it makes it longer and would also just serve as a check. Another question is then whether we should apply this philosophy to the other wrappers as well, because then we have a delta_t setting used by some wrappers but not all of them.

I would like to have a discussion about it before deciding on a solution, maybe there are more ideas to solve this or you all have a strong preference for one of the solutions.

Convergence_criterion structure

Multiple convergence criteria are now specified using convergence_criteria.or or convergence_criteria.and. The settings key refers to a dictionary of the different criteria, with keys convergence_criterion0, convergence_criterion1, etc.:

 "convergence_criterion" :
        {
            "type" : "convergence_criteria.or",
            "settings" :
            {
                "convergence_criterion0" :
                {
                    "type": "convergence_criteria.iteration_limit",
                    "settings":
                    {
                        "maximum": 20
                    }
                },
                "convergence_criterion1" :
                {
                    "type" : "convergence_criteria.relative_norm",
                    "settings" :
                    {
                        "tolerance" : 1e-6,
                        "order" : 2
                    }
                }
            }
        }

A minor adjustment is to change the dictionary corresponding to the settings key into a list, in the same way as is done with the solver wrappers and mappers. In my opinion that would increase uniformity, and the effort is minimal.

 "convergence_criterion" :
        {
            "type" : "convergence_criteria.or",
            "settings" :
            [
                {
                    "type": "convergence_criteria.iteration_limit",
                    "settings":
                    {
                        "maximum": 20
                    }
                },
                {
                    "type" : "convergence_criteria.relative_norm",
                    "settings" :
                    {
                        "tolerance" : 1e-6,
                        "order" : 2
                    }
                }
            ]
        }

Unit test Abaqus wrapper

The unit tests of the Abaqus wrapper don't work anymore. They got broken somewhere in the past, but because they weren't part of the test suites, this went unnoticed.

This issue requires fixing the unit tests and adding them to the test suites.

Automatic run of unittests and examples

On our local system, we should have a system-specific script to automatically run all the unittests and test_examples every day/week/month.
@nicolasdlss already has a script like this for the test_examples, so we can start from that.
Using a cronjob in Linux, the script can be run automatically at certain time intervals.

Equal method for Interfaces and ModelParts

Currently the equality method __eq__(self, other) of the Interface class checks equality of model_part_variable_pairs and data.
It would be more useful to only check the equality of model_part_variable_pairs. Additionally, it can be checked that the same model is referenced.
This can be useful in solver_wrappers and mappers.
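
A minimal sketch of the proposed check; the attribute names follow the description above and are assumptions about the actual Interface class:

class Interface:
    # ... rest of the class unchanged ...

    def __eq__(self, other):
        if not isinstance(other, Interface):
            return NotImplemented
        # compare only the (ModelPart, Variable) pairs and check that the
        # same Model object is referenced; the data itself is not compared
        return (self.model_part_variable_pairs == other.model_part_variable_pairs
                and self.model is other.model)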

Clean up files after running tests

While running the tests, files are created (specifically for the solver wrappers).
It might be cleaner to remove these files if the corresponding test has succeeded.
If a test fails, the files should remain for debugging purposes.

Time discretization in Python solvers

The difference in time discretization between the fluid and structure solver leads to unphysical pressure jumps in time. (J. Vierendeels, K. Dumont, E. Dick, and P.R. Verdonck. Analysis and stabilization of fluid-structure interaction algorithm for rigid-body motion. AIAA Journal, 43(12):2549-2557, 2005.)
To avoid this problem, backward Euler time discretization can also be applied for the structure.

Improving restart

Currently when restart is used, there is no derivative information (no modes from previous iterations for IQN-ILS, for example), but more importantly, zero deformation is used as initial guess.

This makes the restart function useless in many practical cases with high added mass, because the initial guess is too far from the actual solution (leading to a mode in IQN-ILS which is too 'wrong' to stabilize the coupling, resulting in divergence in the 3rd iteration). For example, performing five time steps for tube_fluent2d_abaqus2d and then performing a restart will not work when using IQN-ILS (fyi, Aitken relaxation works but requires about 40 iterations per time step).

A simple and effective solution is to use the solution from the previous time step as initial guess. The implementation is straightforward, because the files from a previous time step are already there and the code to extract this displacement is there as well.

Concretely, I suggest that for a restart, all solver wrappers store the values from the previous time step in the interface upon initialization.

Add polynomials to radial basis mapping

This issue concerns a possible extension/improvement of the current RBF mapper.
By combining radial basis interpolation with polynomial interpolation, it should be possible to exactly recover rigid body motions.
This paper gives more information about this technique.
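
A minimal sketch of the technique for scattered 3D data, using a multiquadric as an example basis function (the actual mapper would reuse its own kernel): the interpolation system is augmented with a linear polynomial, which makes constant and linear fields, and hence rigid body motions, exactly reproducible.

import numpy as np
from scipy.spatial.distance import cdist

def phi(r, c=1.0):
    # example basis function (multiquadric)
    return np.sqrt(1.0 + (r / c) ** 2)

def rbf_poly_interpolate(x_from, f, x_to):
    # x_from: (n, 3) source points, f: (n,) data, x_to: (m, 3) target points
    n = x_from.shape[0]
    P = np.hstack([np.ones((n, 1)), x_from])  # polynomial basis: 1, x, y, z
    A = np.block([[phi(cdist(x_from, x_from)), P],
                  [P.T, np.zeros((4, 4))]])
    # solve for the RBF weights and polynomial coefficients; the extra
    # conditions P.T @ w = 0 let the polynomial part reproduce linear fields
    coeffs = np.linalg.solve(A, np.concatenate([f, np.zeros(4)]))
    P_to = np.hstack([np.ones((x_to.shape[0], 1)), x_to])
    return phi(cdist(x_to, x_from)) @ coeffs[:n] + P_to @ coeffs[n:]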

Convert Gambit meshes to another format

The meshes in the Fluent tests and test examples are generated from Gambit journals. These journals must be converted (or rather rewritten) as ICEM replay files, as ICEM is included in ANSYS, while Gambit was discontinued in 2007.
Specifically, this concerns two meshes: the 2D tube and the 3D tube.

ps: I have some ICEM replay scripts available to start from

Copy in get_interface_input() and get_interface_output() of solver_wrappers

The basic philosophy with respect to copying was that a component has to take action if it doesn't want an Interface to change.
Nevertheless, with the get methods, a copy is actually always needed, or at least very advisable. Therefore, I would like to introduce a copy in the get methods of the solver wrappers.
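
A minimal sketch of the proposed change inside a solver wrapper; copy() stands for whatever deep-copy facility the Interface class provides (an assumption):

class SolverWrapperSketch:
    def __init__(self, interface_input, interface_output):
        self.interface_input = interface_input
        self.interface_output = interface_output

    def get_interface_input(self):
        # return a copy, so the receiving component cannot change the
        # wrapper's internal Interface
        return self.interface_input.copy()

    def get_interface_output(self):
        return self.interface_output.copy()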
