thetis's People

Contributors

acse-ej321, connorjward, cpjordan, dham, dorugeber, jdbetteridge, jhill1, jwallwork23, matt-piggott, mc21hm, mc4117, pbrubeck, stephankramer, swarder, taupalosaurus, thangel, tkarna, tpkarna-cmop, wei-pan, wence-

thetis's Issues

Viz output

The new Firedrake viz output format supports multiple fields in the same file (they are all interpolated/projected into the same space). I note that at the moment each field is dumped into its own file. This is partly because that was all the viz output supported until now. Would it be better to put everything in one file?
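For reference, a minimal sketch of the multi-field interface (plain Firedrake, not Thetis code; the field names are made up):

from firedrake import (UnitSquareMesh, FunctionSpace, VectorFunctionSpace,
                       Function, File)

mesh = UnitSquareMesh(4, 4)
elev_2d = Function(FunctionSpace(mesh, 'DG', 1), name='elev_2d')
uv_2d = Function(VectorFunctionSpace(mesh, 'DG', 1), name='uv_2d')

# all fields passed to write() end up in the same file; Firedrake
# interpolates/projects them into a common linear output space
outfile = File('combined.pvd')
outfile.write(elev_2d, uv_2d)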

Zenodo release request

Is it still the case that Zenodo requests are done manually for Thetis?

I'd like a Zenodo release for commit "424173012fbb6f60a85f544c560a18a061d5a5cb"

Thanks,

Simon

VTK output for fields on continuous function spaces is broken

It seems that for any field that is defined on a continuous function space the corresponding VTK output produces broken vtu files. I think this is due to the self-proclaimed hack (line 123 of thetis/exporter.py) to avoid storing a separate DG coordinates function in every File object. If I comment out that line, things seem to work again. This is presumably because vtu files with only continuous fields do not have a discontinuous mesh. Is the fix to just add an extra condition, and not is_cg(function)? (A sketch of such an is_cg helper follows the reproducer below.)

Reproducer:

from thetis import *
from thetis.exporter import ExportManager
  
mesh = UnitSquareMesh(2,2)

function_spaces = [
        FunctionSpace(mesh, "DG", 0, name="P0"),
        FunctionSpace(mesh, "CG", 1, name="P1"),
        FunctionSpace(mesh, "CG", 2, name="P2"),
        FunctionSpace(mesh, "DG", 1, name="P1DG"),
        FunctionSpace(mesh, "DG", 2, name="P2DG")]

fields = {}
field_meta_data = {}
for fs in function_spaces:
    name = fs.name + "_field"
    field = Function(fs, name=name)
    fields[name] = field
    field_meta_data[name] = {'shortname': name, 'filename': name}

em = ExportManager('outputs', fields.keys(), fields, field_meta_data, export_type='vtk')
em.export()
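For what it's worth, here is a sketch of a hypothetical is_cg helper (the name comes from the question above; this simple check only covers plain CG/DG Lagrange spaces like those in the reproducer):

from firedrake import UnitSquareMesh, FunctionSpace, Function


def is_cg(function):
    """Hypothetical helper: True if the function lives in a continuous space.

    Sketch only: checks the UFL element family, which covers the plain
    CG/DG (Lagrange) spaces used in the reproducer above.
    """
    family = function.function_space().ufl_element().family()
    return family not in ('Discontinuous Lagrange', 'DQ')


mesh = UnitSquareMesh(2, 2)
print(is_cg(Function(FunctionSpace(mesh, 'CG', 1))))  # True
print(is_cg(Function(FunctionSpace(mesh, 'DG', 1))))  # False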

Thetis adjoint tests are broken

There are two failures (see #183 for Jenkins errors):

  • test_adjoint/test_swe_adjoint.py::test_gradient_from_adjoint[setup_steady] fails with a linear solver convergence error.
  • test_adjoint/examples/test_examples.py::test_examples[tidalfarm.py] also fails to converge. In this case we can see that it is because the functional is uniformly zero.

Better testing for time integrators

Thetis implements a bunch of different time integrator schemes, but currently these are not tested properly.

We could at least solve an ODE with them (using a small mesh and a solution that is constant in space) and verify the expected convergence rate; a sketch of the kind of check meant here follows.
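The sketch below uses a hand-rolled Crank-Nicolson on dy/dt = -y as a stand-in for a Thetis integrator, just to show the shape of the convergence check:

import numpy as np


def crank_nicolson(y0, lam, dt, n_steps):
    """Hand-rolled Crank-Nicolson for dy/dt = lam*y (stand-in for a Thetis integrator)."""
    y = y0
    for _ in range(n_steps):
        y = y * (1 + 0.5*dt*lam) / (1 - 0.5*dt*lam)
    return y


lam, y0, t_end = -1.0, 1.0, 1.0
errors = []
for n in (16, 32, 64, 128):
    dt = t_end / n
    errors.append(abs(crank_nicolson(y0, lam, dt, n) - y0*np.exp(lam*t_end)))

# observed convergence rates; a second order scheme should give ~2.0
rates = [np.log2(errors[i] / errors[i+1]) for i in range(len(errors) - 1)]
print(rates)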

Bathymetry being altered in assign_initial_conditions

Hi,

Odd problem that seems to occur in one of my simulations. I read in a netCDF file to create the bathymetry/topography. When I look at the output the bathymetry looks OK, except it seems to be clipped at 27m depth, despite the topography going to 40m+ (i.e. -40m). It otherwise looks OK (not skewed or "off" from my mesh).

So I then printed out the minimum of the bathymetry2d.dat.data variable (i.e. the highest land) at various stages of the simulation and got the following output:

Reading bathymetry/topography data from inputs/shetlands_palaeo_bathy_100m.nc
Interpolating bathymetry/topography data onto simulation mesh
Bathymetry/topography min:-196.91967167572423
Bathymetry/topography max:129.63601321824117
-196.91967167572423
before init
-196.91967167572423
before init cond
-196.91967167572423
dt = 4.0
Using time integrator: CrankNicolson
after init cond
26.610915552352306
before start
26.610915552352306
    0     0 T=      0.00 eta norm:     0.1133 u norm:     0.1133  0.00
volume2d rel. error  0.0000e+00
Exporting: uv_2d elev_2d 
Exporting: 
time:  4.0
26.610915552352306

The lines of code are:

print("before init")
print(min(bathymetry2d.dat.data))
update_forcings(0.0)
# set initial condition for elevation, piecewise linear function
print("before init cond")
print(min(bathymetry2d.dat.data))
solverObj.assign_initial_conditions(uv=Constant((1.0e-6,0.0)), elev=Constant(1.0e-6))
print("after init cond")
print(min(bathymetry2d.dat.data))
#uv, elev = solverObj.timestepper.solution.split()
# Run model
print("before start")
print(min(bathymetry2d.dat.data))

You can see the bathymetry gets altered after assign_initial_conditions. This has been driving me mad for a couple of days now. I've dug around the code and can't see anything obvious. Any ideas from someone who knows the code better?

Zenodo Release of Thetis

Currently releasing a version of Thetis through Zenodo is a manual process. It would be nice if this could be done automatically, the same way as is already done for Firedrake using firedrake-zenodo.

Portable hdf5/netcdf outputs

We need a portable binary output format for sharing model results. Presumably hdf5 or netcdf are the preferred formats. As Thetis may use some esoteric function spaces, I guess we'll need to convert the outputs to P0, P1, or P1DG before exporting.
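A minimal sketch of the conversion step suggested above (plain Firedrake, made-up field names): project the field into P1 and write a portable HDF5 checkpoint.

from firedrake import (UnitSquareMesh, FunctionSpace, Function,
                       DumbCheckpoint, FILE_CREATE)

mesh = UnitSquareMesh(4, 4)
# stand-in for a field living on an "esoteric" space
exotic = Function(FunctionSpace(mesh, 'DG', 2), name='tracer')

# project into P1 before writing the portable HDF5 checkpoint
p1 = FunctionSpace(mesh, 'CG', 1)
tracer_p1 = Function(p1, name='tracer_p1')
tracer_p1.project(exotic)

chk = DumbCheckpoint('tracer_p1', mode=FILE_CREATE)
chk.store(tracer_p1)
chk.close()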

Memory consumption

Memory consumption seems to grow boundlessly with large simulations. Here's an example from the idealized estuary case (turbulence branch):

$pidstat -h -r -p 18846 30
Linux 4.2.0-35-generic (hyakutake)      04/09/2016      _x86_64_        (4 CPU)

#      Time   UID       PID  minflt/s  majflt/s     VSZ    RSS   %MEM  Command
 1460219513  1000     18846    514.13      0.00 2312504 1374720   8.42  python
 1460219543  1000     18846    368.17      0.07 2534136 1592508   9.75  python
 1460219573  1000     18846    399.67      0.00 2787808 1846216  11.30  python
 1460219603  1000     18846    310.50      0.00 2970008 2028532  12.42  python
 1460219633  1000     18846    183.33      0.00 3077760 2136284  13.08  python
 1460219663  1000     18846      0.00      0.00 3077760 2136284  13.08  python
 1460219693  1000     18846      1.30      0.00 3077760 2136388  13.08  python
 1460219723  1000     18846      0.00      0.00 3077760 2136388  13.08  python
 1460219753  1000     18846      8.73      0.00 3107268 2165872  13.26  python
 1460219783  1000     18846     98.73      0.00 3262180 2320784  14.21  python
 1460219813  1000     18846    306.13      0.00 3469808 2528008  15.47  python
 1460219843  1000     18846    363.87      0.00 3715588 2773976  16.98  python
 1460219873  1000     18846    338.30      0.00 3903300 2961496  18.13  python
 1460219903  1000     18846    380.43      0.00 4152840 3210848  19.65  python
 1460219933  1000     18846    298.73      0.00 4333568 3391808  20.76  python
 1460219963  1000     18846    318.77      0.00 4555468 3613784  22.12  python
 1460219993  1000     18846    354.37      0.00 4772844 3831080  23.45  python
 1460220023  1000     18846    276.20      0.00 4951260 4009200  24.54  python
 1460220053  1000     18846    353.80      0.00 5171488 4229532  25.89  python
 1460220083  1000     18846    365.97      0.00 5356540 4414732  27.02  python
 1460220113  1000     18846    258.50      0.00 5567644 4625752  28.32  python
 1460220143  1000     18846    386.77      0.00 5787396 4845760  29.66  python
 1460220173  1000     18846    383.00      0.00 6004996 5063300  30.99  python
 1460220203  1000     18846    345.73      0.00 6223184 5280888  32.33  python
 1460220233  1000     18846    280.57      0.00 6403664 5461848  33.43  python

If I manually invoke the garbage collector (e.g. at the end of each export) there's no increase:

$pidstat -h -r -p 21984 30
Linux 4.2.0-35-generic (hyakutake)      04/09/2016      _x86_64_        (4 CPU)

#      Time   UID       PID  minflt/s  majflt/s     VSZ    RSS   %MEM  Command
 1460221257  1000     21984   3064.67      0.00 2137176 1200828   7.35  python
 1460221287  1000     21984    404.30      0.00 2451024 1514800   9.27  python
 1460221317  1000     21984    110.47      0.00 2498376 1558064   9.54  python
 1460221347  1000     21984      0.00      0.00 2498376 1558064   9.54  python
 1460221377  1000     21984     46.37      0.00 2498220 1558212   9.54  python
 1460221407  1000     21984      0.17      0.00 2498220 1558212   9.54  python
 1460221437  1000     21984     23.37      0.00 2497416 1557428   9.53  python
 1460221467  1000     21984      0.17      0.00 2497416 1557428   9.53  python
 1460221497  1000     21984     30.23      0.00 2497416 1557468   9.53  python
 1460221527  1000     21984      0.00      0.00 2497416 1557468   9.53  python
 1460221557  1000     21984     35.63      0.00 2497416 1557468   9.53  python
 1460221587  1000     21984      0.00      0.00 2497416 1557468   9.53  python
 1460221617  1000     21984     56.87      0.00 2537072 1596768   9.77  python
 1460221647  1000     21984     35.97      0.00 2497416 1557468   9.53  python
 1460221677  1000     21984     11.33      0.00 2527436 1587444   9.72  python
 1460221707  1000     21984     23.83      0.00 2497416 1557468   9.53  python
 1460221737  1000     21984      0.00      0.00 2497416 1557468   9.53  python
 1460221767  1000     21984     35.13      0.00 2497416 1557468   9.53  python
 1460221797  1000     21984      0.00      0.00 2497416 1557468   9.53  python
 1460221827  1000     21984     35.17      0.00 2497416 1557468   9.53  python
 1460221857  1000     21984      0.00      0.00 2497416 1557468   9.53  python
 1460221887  1000     21984      0.00      0.00 2497416 1557468   9.53  python

If system memory starts to run out, the garbage collector seems to be called automatically, but you'll have to wait until then.

Why is it necessary to call the garbage collector manually?

And even so, why is there so much memory being allocated at run time? I thought that once you've created the necessary functions, there should be no need to allocate more.
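A sketch of the manual-collection workaround described above; where exactly to hook the collection depends on the driver script, so the export call here is just a placeholder:

import gc


def export_and_collect(export_func):
    """Call the model's export routine, then force a garbage collection.

    export_func is whatever the driver script uses to export (e.g. a bound
    Thetis export method); a no-op lambda is used here for illustration.
    """
    export_func()
    gc.collect()


export_and_collect(lambda: None)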

Support for user defined callback functions

In many cases users want to compute custom diagnostics of simulation state at run time. It would be useful to have a generic method for supporting this.

I think all the existing callbacks (volume and mass conservation etc) could then be ported to the same interface.

Default model options are global

It appears that the default model options are in global scope:

from thetis import *
import thetis.options

a = thetis.options.ModelOptions2d()
a.timestepper_options.solver_parameters = {'foo': 'bar'}

print('a: {:}'.format(a.timestepper_options.solver_parameters))

b = thetis.options.ModelOptions2d()
print('b: {:}'.format(b.timestepper_options.solver_parameters))

Prints

a: {'foo': 'bar'}
b: {'foo': 'bar'}

Probably not what we want. At least this affects the test suite.

I think this is because we define the default options as trait objects in the attach_paired_options decorator, rather than as a class that is instantiated separately for each instance.
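A minimal plain-Python illustration of the suspected problem (not the actual traitlets/Thetis code): a mutable default created once at class-definition time is shared by every instance.

class PairedOptions(object):
    solver_parameters = {}           # one dict, created once, shared by all instances


a = PairedOptions()
a.solver_parameters['foo'] = 'bar'   # mutates the shared class attribute

b = PairedOptions()
print(b.solver_parameters)           # {'foo': 'bar'} -- the same surprise as above


class PairedOptionsFixed(object):
    def __init__(self):
        self.solver_parameters = {}  # fresh dict per instance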

Strong residuals

In order to do dual weighted residual (DWR) error estimation in Thetis, we need strong residuals. In the (closed) PR https://github.com/thetisproject/thetis/pull/146, I started working on this by defining strong residuals for each term in the shallow water and 2d tracer equations. However, the code structure needed some major changes. For error estimation, the idea was to use callbacks to periodically output strong residual fields, which are then weighted by adjoint solutions to give the DWR estimator.

The strong residual is also needed for SUPG stabilisation. Suppose we have the strong residual R for the steady tracer equation. Then SUPG amounts to adding a stabilisation term
tau * dot(u, grad(psi)) * R * dx
to the weak form, where tau is the stabilisation parameter, u is the fluid velocity and psi is the test function used. Instead of having a separate term for SUPG, perhaps it would make sense to simply add
tau * dot(u, grad(psi)) * R_T * dx
to the weak residual in each term T, where R_T is the corresponding term in the strong residual.
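For concreteness, here is a sketch (made-up names, not the Thetis term classes) of the SUPG construction above for a steady advection-diffusion tracer equation:

from firedrake import (UnitSquareMesh, FunctionSpace, Function, TestFunction,
                       Constant, CellDiameter, dot, grad, div, sqrt, dx)

mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, 'CG', 1)
c = Function(V, name='tracer')
psi = TestFunction(V)

u = Constant((1.0, 0.0))   # fluid velocity
D = Constant(0.1)          # diffusivity
source = Constant(0.0)

# strong residual R of the steady advection-diffusion equation
R = dot(u, grad(c)) - div(D*grad(c)) - source

# standard weak residual plus the SUPG term tau*dot(u, grad(psi))*R*dx
h = CellDiameter(mesh)
tau = h / (2*sqrt(dot(u, u)))   # one common (not the only) choice of tau
F = (dot(u, grad(c))*psi + D*dot(grad(c), grad(psi)) - source*psi)*dx \
    + tau*dot(u, grad(psi))*R*dx
# F can now be handed to solve(F == 0, c, bcs=...)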

To do list:

  • 2d SUPG
  • 3d SUPG
  • Strong residuals for tracer_eq_2d.
  • Strong residuals for tracer_eq.
  • Strong residuals for shallowwater_eq.
  • Strong residuals for momentum_eq.
  • Testing for these.
  • Callbacks to output residual quantities as fields.

Also for DWR we need edge residuals to take into account fluxes:

  • Edge residuals to implement SUPG in tracer_eq_2d.
  • Edge residuals to implement SUPG in tracer_eq.
  • Edge residuals for shallowwater_eq.
  • Edge residuals for momentum_eq.
  • Testing for these.

Specific release of Thetis

Hi

I was wondering if it would be possible to do a release of Thetis at the following commit id "bab2234b4f84de6c3e5244ca0e466ec15a62d991".

Thanks for your help!

Design document for z-layers

Adding support for z-layers

In addition to the current support for sigma coordinates, we would like to be able to use z-layers. This is important where we have shallow bathymetry and would like to exploit this to reduce the number of degrees of freedom we're using. Also probably useful for wetting and drying.

The implementation will have cross-cutting implications across both Firedrake and PyOP2, but the aim here is to iron out the user-facing interface (and hence the necessary abstractions) which will guide the implementation.

Variable layer columns

Currently, Firedrake only supports a single global number of extruded layers across the whole domain. The first step is providing the ability to have a variable number of columns in the domain. Proposed API:

base = Mesh(...)
start = MeshFunction(base, dtype=int)
stop = MeshFunction(base, dtype=int)
# The convenience version that just has a globally constant layer number remains.
mesh = ExtrudedMesh(base, start_layer=start, stop_layer=stop)

Supporting this should be reasonably straightforward. Note that we do not expect to lose performance by making the number of layers per-column. The performance advantage of extruded meshes comes from the data layout, not that we have hard-coded one of the loop bounds.

The coordinate field

We would like not to have internal vertical facets that are "hanging".

+---+---+---+
|   |   |   |
+---+---+---+
|   |   |   |
+---+---+---+
|   | <- This causes a problem
+---+

The reason for this is that we don't want to have to treat these specially in the code (all the logic becomes much more complicated). The proposed solution is to create degenerate cells where there is a step in the number of layers, squashing the volume of the hanging facet to zero.

+---+---+---+
|   |   |   |
+---+---+---+
|   |   |   |
+---+---+---+
|  /
+/

We will need to be a little careful here, because now all the terms in the Jacobian will be zero, and if we don't take care, computations of J/detJ on the facet will create a NaN.

Integrating on the vertical facets

I think the model is that the "top" layer number is the same in all columns, and so the data model encodes the "start" layer in each column. Since hanging vertical facets are squashed to zero, and therefore do not contribute non-zero values at any point, when we integrate over the vertical facets we have to inspect the number of layers in both of the columns and choose the starting layer corresponding to the shortest column.

element_continuity wrong

In updating to the new Firedrake API (FInAT, not FIAT elements), I fixed element_continuity. In doing so, I realised that there was a bug in its definition.

def element_continuity(fiat_element):
    """Return an :class:`ElementContinuity` instance with the
    continuity of a given element.

    :arg fiat_element: The fiat element to determine the continuity
        of.
    :returns: A new :class:`ElementContinuity` instance.
    """
    import FIAT
    cell = fiat_element.get_reference_element()
...
        edofs = fiat_element.entity_dofs()
        dim = cell.get_spatial_dimension()
        dg = True
        for i in range(dim - 1):
            if any(len(k) for k in edofs[i].values()):
                dg = False
                break
        return ElementContinuity(dg, dg, dg)

This incorrectly reports elements that have edge dofs but no vertex dofs as DG (oops!).

Fixing this (the fix is to loop for i in range(dim), not for i in range(dim - 1)) leads to a bunch of tests failing, so I wonder what has gone wrong.

The api-changes branch does this using the new FInAT elements, but it fails tests because it is not bug-for-bug compatible.
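A runnable sketch of the old versus fixed check, using a hand-written entity_dofs dict shaped like FIAT's for an element with edge dofs but no vertex dofs:

# entity dimension -> entity -> dof list, mimicking FIAT's entity_dofs()
# for an element with edge dofs but no vertex dofs (e.g. lowest-order RT)
edofs = {0: {0: [], 1: [], 2: []},     # vertices: no dofs
         1: {0: [0], 1: [1], 2: [2]},  # edges: one dof each
         2: {0: []}}                   # cell interior: no dofs
dim = 2


def is_dg(edofs, dim, stop):
    return not any(len(k) for i in range(stop) for k in edofs[i].values())


print(is_dg(edofs, dim, dim - 1))  # True  -- the buggy range(dim - 1) check
print(is_dg(edofs, dim, dim))      # False -- the fixed range(dim) check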

TimeIntegrator API update

I think we could simplify the time integrator API a little:

  1. Omit solution function from TimeIntegrator.advance call

    Currently TimeIntegrator.advance takes the solution function as an argument. This is redundant as in practice TimeIntegrator needs to store the solution function as a property in any case. It's also difficult to ensure meaningful operation in case the user happens to pass a different function to the advance method.

  2. Omit dt from TimeIntegrator.advance + add set_dt method

    We normally use a constant time step so there's no need to pass the time step to every call. In case the user wants to change the time step, I would add a set_dt method instead. Then one could make sure that the time step gets updated correctly. In cases where changing the time step is not allowed (e.g. linear multistep methods) we'd raise an exception.

Here's a draft of the base classes

from abc import ABCMeta, abstractmethod

from firedrake import Constant


class TimeIntegratorBase(object):
    """Base class for all time integrator objects."""
    __metaclass__ = ABCMeta

    @abstractmethod
    def advance(self, t, update_forcings=None):
        """Advances equations for one time step."""
        pass

    @abstractmethod
    def initialize(self, *args, **kwargs):
        """Initializes the time integrator"""
        pass


class TimeIntegrator(TimeIntegratorBase):
    """Base class for all time integrator objects that solve a single equation"""
    def __init__(self, equation, solution, fields, dt, solver_parameters={}):
        super(TimeIntegrator, self).__init__()

        self.equation = equation
        self.solution = solution
        self.fields = fields
        self.dt_const = Constant(dt)

        # unique identifier for solver
        self.name = '-'.join([self.__class__.__name__,
                              self.equation.__class__.__name__])
        self.solver_parameters = {}
        self.solver_parameters.update(solver_parameters)

    def set_dt(self, dt):
        """Updates time step"""
        self.dt_const.assign(dt)

Here I'm separating the TimeIntegratorBase and TimeIntegrator classes: all time integrators are derived from the former (e.g. 3D coupled time integrators), while all "normal" time integrators (that iterate a single equation) inherit from the latter.

Here I haven't changed the signature of the update_forcings function, which is still passed to the advance method. There might be a better way of handling that too.
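As a toy, self-contained illustration of the proposed interface (no Firedrake objects involved, not Thetis code): the integrator owns its solution and time step, advance takes neither, and set_dt is the only way to change the step.

class ScalarForwardEuler(object):
    """Toy integrator following the proposed API shape; not Thetis code."""

    def __init__(self, rhs, solution, dt):
        self.rhs = rhs
        self.solution = solution
        self.dt = dt

    def initialize(self, value):
        self.solution = value

    def set_dt(self, dt):
        """The only way to change the time step."""
        self.dt = dt

    def advance(self, t, update_forcings=None):
        """Advance one step; takes neither the solution nor dt."""
        if update_forcings is not None:
            update_forcings(t)
        self.solution += self.dt * self.rhs(t, self.solution)


ti = ScalarForwardEuler(lambda t, y: -y, 1.0, 0.1)
ti.initialize(1.0)
t = 0.0
for _ in range(10):
    ti.advance(t)
    t += ti.dt
print(ti.solution)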

Options should support more function spaces

Currently function spaces are chosen with the boolean options.mimetic: False corresponds to the P(n)DG-P(n)DG pair and True to the RT(n+1)-P(n)DG pair.
#11 adds an additional options.continuous_pressure that provides a P(n)DG-P(n+1) pair.

All cases use options.order as the base degree (n).

This needs to be generalized to support more function spaces. Easy way is to define names for each combination, for example

options.element_family = 'dg-dg'  # p(n)dg-p(n)dg
options.element_family = 'rt-dg'  # rt(n+1)dg-p(n)dg
options.element_family = 'dg-cg'  # p(n)dg-p(n+1)

The above names do not contain the degrees, as sane combinations are presumably dictated by the stability constraints of the shallow water equations. (Are there exceptions?)
Alternatively we could use human readable names (if such exist)? A sketch of how such names could map onto function space pairs follows.
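The sketch below is a hypothetical helper, not existing Thetis code, showing how the proposed names could map to velocity/elevation function space pairs:

from firedrake import UnitSquareMesh, FunctionSpace, VectorFunctionSpace


def build_function_spaces(mesh, element_family, degree):
    """Map an element_family name to a (velocity, elevation) space pair."""
    n = degree
    if element_family == 'dg-dg':      # P(n)DG - P(n)DG
        U = VectorFunctionSpace(mesh, 'DG', n)
        H = FunctionSpace(mesh, 'DG', n)
    elif element_family == 'rt-dg':    # RT(n+1) - P(n)DG
        U = FunctionSpace(mesh, 'RT', n + 1)
        H = FunctionSpace(mesh, 'DG', n)
    elif element_family == 'dg-cg':    # P(n)DG - P(n+1)
        U = VectorFunctionSpace(mesh, 'DG', n)
        H = FunctionSpace(mesh, 'CG', n + 1)
    else:
        raise ValueError('Unknown element family: {:}'.format(element_family))
    return U, H


mesh = UnitSquareMesh(4, 4)
U, H = build_function_spaces(mesh, 'rt-dg', 1)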

Negative tracer values

Apparently a few people are looking for a steady state passive tracer example in Thetis. So I thought I would code up the TELEMAC-2D 'point discharge with diffusion' test case, which I previously considered using a standalone SUPG P1 solver in this paper: https://zenodo.org/record/3653373#.Xlkdaumnw5k

I made the new branch passive-tracer-example, where there is a coupled 'SteadyState' solver. If I run examples/passiveTracer/passiveTracer.py, it gives a very encouraging plot:

[plot: computed tracer field showing negative values]

Visually, it agrees nicely with the asymptotic solution and SUPG approximation as plotted in the paper (Figure 1, p.92). However, the values are significantly negative! Does anyone have an idea what is going wrong?

Lateral boundaries in slope limiter

#29 fixes the top/bottom conditions for the p1dg slope limiters but lateral boundaries are still broken. That is, the elements at boundaries only see their interior neighbors, which often leads to overly strong limiting.

colorama conflicts with pytest

When running some of the tests I get the following error:

py.test test/operations/ 
============================================================================================== test session starts ===============================================================================================
platform linux2 -- Python 2.7.9, pytest-2.8.2, py-1.4.30, pluggy-0.3.1
rootdir: /data/lmitche1/models/cofs, inifile: 
plugins: pep8-1.0.6, ipdb-0.1.dev2, xdist-1.11, benchmark-2.4.1, cov-1.8.1
collected 32 items 

test/operations/test_operations_2d-3d.py ................................

================================================================================== 32 passed, 4 pytest-warnings in 2.81 seconds ==================================================================================
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/homes/lmitche1/.local/lib/python2.7/site-packages/colorama/initialise.py", line 19, in reset_all
    AnsiToWin32(orig_stdout).reset_all()
  File "/homes/lmitche1/.local/lib/python2.7/site-packages/colorama/ansitowin32.py", line 67, in __init__
    strip = conversion_supported or not is_a_tty(wrapped)
  File "/homes/lmitche1/.local/lib/python2.7/site-packages/colorama/ansitowin32.py", line 17, in is_a_tty
    return hasattr(stream, 'isatty') and stream.isatty()
ValueError: I/O operation on closed file
Error in sys.exitfunc:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/usr/lib/python2.7/atexit.pyc in _run_exitfuncs()
     22         func, targs, kargs = _exithandlers.pop()
     23         try:
---> 24             func(*targs, **kargs)
     25         except SystemExit:
     26             exc_info = sys.exc_info()

/homes/lmitche1/.local/lib/python2.7/site-packages/colorama/initialise.pyc in reset_all()
     17 
     18 def reset_all():
---> 19     AnsiToWin32(orig_stdout).reset_all()
     20 
     21 

/homes/lmitche1/.local/lib/python2.7/site-packages/colorama/ansitowin32.pyc in __init__(self, wrapped, convert, strip, autoreset)
     65         # should we strip ANSI sequences from our output?
     66         if strip is None:
---> 67             strip = conversion_supported or not is_a_tty(wrapped)
     68         self.strip = strip
     69 

/homes/lmitche1/.local/lib/python2.7/site-packages/colorama/ansitowin32.pyc in is_a_tty(stream)
     15 
     16 def is_a_tty(stream):
---> 17     return hasattr(stream, 'isatty') and stream.isatty()
     18 
     19 

ValueError: I/O operation on closed file

This appears to be because in this mode pytest has already closed stdout before colorama gets to it in atexit. colorama.init is claimed to be a no-op on all platforms but Windows, but this is not the case, because it registers an atexit handler in all cases. I think you're only using colorama for symbolic ANSI escape sequences, so I think one can get around this problem by just not calling colorama.init. We're unlikely to run on Windows any time soon.
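A sketch of that workaround: use colorama only for its symbolic ANSI sequences and never call colorama.init, so no atexit handler is registered.

from colorama import Fore, Style


def colored(msg, color=Fore.RED):
    # symbolic ANSI sequences only; colorama.init() is never called,
    # so no atexit handler gets registered
    return color + msg + Style.RESET_ALL


print(colored('error: something went wrong'))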

AssertionError while trying to run example from the Thetis website

Here is the code:
##########################################
from thetis import *
mesh2d = RectangleMesh(20, 20, 10e3, 10e3)

fs_p1 = FunctionSpace(mesh2d, 'CG', 1)
bathymetry_2d = Function(fs_p1, name='Bathymetry').assign(10.0)

solver_obj = solver.FlowSolver(mesh2d, bathymetry_2d, n_layers=6)
options = solver_obj.options
options.element_family = 'dg-dg'
options.polynomial_degree = 1
options.timestepper_type = 'SSPRK22'
options.timestepper_options.use_automatic_timestep = False
options.solve_salinity = False
options.solve_temperature = False
options.simulation_export_time = 50.0
options.simulation_end_time = 3600.
options.timestep = 25.0

solver_obj.create_function_spaces()
init_elev = Function(solver_obj.function_spaces.H_2d)
coords = SpatialCoordinate(mesh2d)
init_elev.project(2.0*exp(-((coords[0] - 4e3)**2 + (coords[1] - 4.5e3)**2)/2.2e3**2))
solver_obj.assign_initial_conditions(elev=init_elev)

solver_obj.iterate()
###########################################

The output (what am I doing wrong?):
##########################################
/home/qismath/firedrake/lib/python3.6/site-packages/pyproj/crs.py:77: FutureWarning: '+init=:' syntax is deprecated. ':' is the preferred initialization method.
return _prepare_from_string(" ".join(pjargs))
UFL:WARNING Discontinuous Lagrange element requested on triangle * interval, creating DQ element.
Coupled time integrator: CoupledTwoStageRK
2D time integrator: ESDIRKTrapezoid
3D time integrator: SSPRK22ALE
3D implicit time integrator: BackwardEuler
Traceback (most recent call last):
  File "test_flow.py", line 23, in <module>
    solver_obj.assign_initial_conditions(elev=init_elev)
  File "/home/qismath/firedrake/src/thetis/thetis/solver.py", line 877, in assign_initial_conditions
    self.create_equations()
  File "/home/qismath/firedrake/src/thetis/thetis/solver.py", line 800, in create_equations
    elem_height=self.fields.v_elem_size_2d)
  File "/home/qismath/firedrake/src/thetis/thetis/utility.py", line 950, in __init__
    assert (len(nodes) == out_nodes)
AssertionError

Redesign equations

The Equation class as it is implemented now is too rigid for developing custom time integrators. We need a more flexible API where the user can choose which terms are treated implicitly/explicitly/differently.

Equations and terms

Equation class defines all the terms in the equation. It takes a bunch of options and the function space where the solution belongs:

function_space = solver.function_spaces.P1DG
options = {'use_grad_div_viscosity': True}
eq = MyEquation(function_space, **options)

class MyEquation(Equation):
    def __init__(self, function_space, ...)
        self.function_space = function_space
        self.test = TestFunction(self.function_space)
        ...

Options are typically booleans like above.

The terms are implemented as methods that return the corresponding form, including interface terms and boundary conditions

class MyEquation(Equation):
    ...
    def viscosity(self, fields, bnd_conditions):
        """viscosity operator with interior penalty terms."""
        grad_div = self.options.use_grad_div_viscosity
        f = 0
        f += ... self.test * dx
        return f

Sign convention: all terms are assumed to be on the left hand side of the equation A + term = 0.

The terms depend on a fields dictionary

fields={'uv_3d': solver.fields.uv_3d,
        'elev_3d': solver.fields.elev_3d,
        'viscosity': None}

Fields should be Coefficients, e.g. Function or Constant instances, or UFL expressions(?). Some fields are optional, e.g. if viscosity is omitted (or it is None) the corresponding terms will be ignored. (Alternatively we could have option.use_viscosity.) Some fields are required and should be asserted.

The equation object knows all the terms that it implements

class MyEquation(Equation):
    def __init__(...):
        ...
        # add some python magic to do this
        self.terms = {}
        self.terms['viscosity'] = self.viscosity
        self.terms['elevation_gradient'] = self.elevation_gradient

To differentiate how the terms should be treated, the user can add labels:

eq.set_label('implicit', ('viscosity', 'elevation_gradient'))
eq.set_label('explicit', 'advection')
eq.set_label('source', 'wind_stress')

The equation class provides a method for evaluating the forms

f_implicit = eq.get_form('implicit', fields)
# returns eq.viscosity(fields) + eq.elevation_gradient(fields)

f_explicit = eq.get_form('explicit', fields)
# returns eq.advection(fields)

Time integrator

Time integrator then takes the equation, fields and labels as an argument

labels = {'implicit': ('viscosity', 'elevation_gradient'),
          'explicit': 'advection'}
ti = MyTimeIntegrator(eq, fields, labels, bnd_conditions)

The labels depend on the time integrator, e.g. backward Euler only has implicit terms.
Using the Equation.get_form method the time integrator can then construct the solvers it needs:

class MyTimeIntegrator(Equation):
    def __init__(self, equation, fields, labels, bnd_conditions):
        self.equation = equation
        ...
        mass_term = ...
        f_k1 = mass_term + self.dt*self.equation.get_form('implicit', fields_k1)
        self.problem_k1 = NonlinearVariationalProblem(f_k1, self.solution_k1)
        self.solver_k1 = NonlinearVariationalSolver(self.problem_k1)

    def set_timestep(self, dt):
        self.dt.assign(dt)

    def advance(self, time, update_forcings=None):
        ...

NOTE: Currently the equation object knows the fields it uses and also the solution function. These however may depend on the time integrator so it's better to define them here.

Boundary conditions

Boundary conditions are defined in a dictionary that defines which fields should be replaced at the boundary (for weak boundary conditions). There can be additional boundary-specific options.

ocean_bnd_id = 1
river_bnd_id = 2
ocean_uv = Constant((1, 0, 0))
river_uv = Constant((0.05, 0, 0))
ocean_funcs = {'uv_3d': ocean_uv}
river_funcs = {'uv_3d': river_uv, 'clamped': True}
bnd_advection = {ocean_bnd_id: ocean_funcs,
                 river_bnd_id: river_funcs}

# apply to term
f_adv = eq.advection(fields, bnd_advection)

# or via the time integrator
bcs = {'advection': bnd_advection}
ti = MyTimeIntegrator(eq, fields, labels, bcs)

It should be possible to support Dirichlet boundary conditions in a similar fashion,

ocean_funcs = DirichletBC(...)

this will just be passed to the solver instead of to the term.

Naming convention

For the sake of clarity the field names should follow field_metadata (defined in field_defs.py) as closely as possible. Likewise, options should be similar to options.py where applicable.

Shallow water solver is broken in 3D

With recent Firedrake updates, the 3D test cases fail if there's 2D-3D coupling with a non-zero coupling term.

For example, the lockExchange example manifests this error:

$ python lockExchange.py -r coarse
UFL:WARNING Discontinuous Lagrange element requested on triangle * interval, creating DQ element.
Coupled time integrator: CoupledTwoStageRK
  2D time integrator: ESDIRKTrapezoid
  3D time integrator: SSPRK22ALE
  3D implicit time integrator: DIRK22
{'ksp_type': 'gmres', 'pc_type': 'fieldsplit', 'pc_fieldsplit_type': 'multiplicative'}
2D mesh: 99 nodes, 128 triangles
3D mesh: 10 layers, 1280 prisms
Horizontal element size: 1414.21 ... 1414.21 m
Vertical element size: 2.000 ... 2.000 m
Element family: dg-dg, degree: 1
Number of tracer DOFs: 7680
Number of cores: 1
Tracer DOFs per core: ~7680.0
  - dt 2d swe: 4.711801835659298
  - dt h. advection: 70.71067639983731
  - dt v. advection: 20.83333333333321
  - dt viscosity: 19.999999027690038
  - CFL adjusted dt: 2D: inf 3D: 19.999999027690038
  - chosen dt: 2D: 19.999999027690038 3D: 19.999999027690038
  - adjusted dt: 2D: 19.565217391304348 3D: 19.565217391304348
dt = 19.565217
max h diff 851.8518104386496 - 851.8518896211225
Running lock exchange problem with options:
Resolution: coarse
Reynolds number: 1.0
Use slope limiters: True
Horizontal viscosity: 1000.0
Lax-Friedrichs factor vel: 0.0
Lax-Friedrichs factor trc: 0.0
Exporting to outputs_coarse_dg-dg_tri_p1_visc-const_Re1.0_lf-vel0.0_lf-trc0.0_lim
Elem size: 1414.2135279967463 1414.2135937246817
    0     0 T=      0.00 eta norm:     0.0000 u norm:     0.0000  0.00
temp_3d mass rel. error  0.0000e+00
temp_3d overshoot 0 0
volume2d rel. error  0.0000e+00
volume3d rel. error  0.0000e+00
Traceback (most recent call last):
  File "PETSc/petscdmshell.pxi", line 322, in petsc4py.PETSc.DMSHELL_CreateFieldDecomposition
  File "/home/karna/temp/firedrake/src/firedrake/firedrake/dmhooks.py", line 367, in create_field_decomposition
    call_setup=True)
  File "/home/karna/temp/firedrake/src/firedrake/firedrake/dmhooks.py", line 172, in add_hook
    raise ValueError("Expecting non-empty stack")
ValueError: Expecting non-empty stack

The above exception was the direct cause of the following exception:

SystemError: <built-in method insert of list object at 0x7fd499423c48> returned a result with an error set
Exception ignored in: 'petsc4py.PETSc.traceback'
SystemError: <built-in method insert of list object at 0x7fd499423c48> returned a result with an error set
Traceback (most recent call last):
  File "lockExchange.py", line 261, in <module>
    parse_options()
  File "lockExchange.py", line 257, in parse_options
    run_lockexchange(**args_dict)
  File "lockExchange.py", line 215, in run_lockexchange
    solver_obj.iterate()
  File "/home/karna/sources/thetis/thetis/solver.py", line 1164, in iterate
    update_forcings, update_forcings3d)
  File "/home/karna/sources/thetis/thetis/coupled_timeintegrator.py", line 692, in advance
    self.timesteppers.swe2d.solve_stage(i_stage, t, update_forcings)
  File "/home/karna/sources/thetis/thetis/rungekutta.py", line 559, in solve_stage
    self.solve_tendency(i_stage, t, update_forcings)
  File "/home/karna/sources/thetis/thetis/rungekutta.py", line 551, in solve_tendency
    self.solver[i_stage].solve()
  File "/home/karna/temp/firedrake/src/firedrake/firedrake/variational_solver.py", line 264, in solve
    self.snes.solve(None, work)
  File "PETSc/SNES.pyx", line 557, in petsc4py.PETSc.SNES.solve
petsc4py.PETSc.Error: error code 77
[0] SNESSolve() line 4437 in /tmp/pip-req-build-zdul6tge/src/snes/interface/snes.c
[0] SNESSolve_NEWTONLS() line 225 in /tmp/pip-req-build-zdul6tge/src/snes/impls/ls/ls.c
[0] KSPSolve() line 707 in /tmp/pip-req-build-zdul6tge/src/ksp/ksp/interface/itfunc.c
[0] KSPSetUp() line 377 in /tmp/pip-req-build-zdul6tge/src/ksp/ksp/interface/itfunc.c
[0] PCSetUp() line 894 in /tmp/pip-req-build-zdul6tge/src/ksp/pc/interface/precon.c
[0] PCSetUp_FieldSplit() line 580 in /tmp/pip-req-build-zdul6tge/src/ksp/pc/impls/fieldsplit/fieldsplit.c
[0] PCFieldSplitSetDefaults() line 537 in /tmp/pip-req-build-zdul6tge/src/ksp/pc/impls/fieldsplit/fieldsplit.c
[0] Petsc has generated inconsistent data
[0] Unhandled case, must have at least two fields, not 0

The error occurs in self.timesteppers.swe2d.solve_stage on the second iteration (iteration 1, RK stage 0) because the coupling term is zero during the first iteration. The coupling term is a source term in the 2D momentum equation (function solver.fields.split_residual_2d).

The error only occurs if fieldsplit is used in the 2D solver. The default solver_parameters are {'ksp_type': 'gmres', 'pc_type': 'fieldsplit', 'pc_fieldsplit_type': 'multiplicative', 'snes_type': 'newtonls'}.

Documentation

We need better documentation of the source code, at least

  • all functions/classes should have docstrings
  • all equations/terms should have clear docstrings with proper math

We should also update the website to contain

  • Example 2D and 3D runs
    • Line-by-line description on how to set up a run
    • Instructions how to visualize results
    • How to store simulation state and continue runs
    • How to set up boundary and initial conditions
    • How to add custom callbacks to compute diagnostics
  • Overview of the solver: governing equations, time integrators
    • Page explaining the 2D solver
    • Page explaining the 3D solver
    • Document Term, Equation, TimeIntegrator abstractions

Issue with running thetis - bug report

Hi all! I am having a very odd issue with Firedrake/Thetis. I am trying to run the same code, which runs normally on my other machine. The same script used to run on this machine until yesterday, so to be sure I reinstalled the Firedrake environment and Thetis, since there was no change in the script I am trying to run.

Could you please guide me as to what might have gone wrong, in case you have come across something similar?
I attach the error message produced:

bug_report1.txt

In order to simplify this, I went back to the examples and ran channel2d. I tried this both in the master branch and in the wetting and drying branch I normally use. I added the output in a text file:

channel2d_bug_report.txt

Also, since this is my first bug report, can you please advise if that was clear enough for future reference?

Kind regards,
Thanasis

Issue reading checkpoints back in

Hi,

I ran a tidal model using 'dg-dg' and the 2D SWE. I saved Velocity2d as h5 files, planning to do some post-processing afterwards.

The post processing code is:

...
V = VectorFunctionSpace(mesh, "DG", 1)
uv = Function(V, name='vel_2d')
print(uv.dat.data.shape)
for i in range(start_file,int(t_end/t_export)+1):
    print('Reading h5 files. Time ',i,i*t_export)
    checkpoint_file = DumbCheckpoint(file_location + '/Velocity2d_{:05}'.format(i),    mode=FILE_READ)
    checkpoint_file.load(uv)
    checkpoint_file.close()

The model was run on 10 cores, so running my post-processing code on 10 cores also, I get:

Reading Velocity h5 files. Time  10 9000
(163497, 2)
Reading Velocity h5 files. Time  10 9000
Traceback (most recent call last):
  File "post_processing_vel.py", line 56, in <module>
    checkpoint_file.load(uv)
  File "/home/firedrake/src/firedrake/firedrake/checkpointing.py", line 238, in load
    self.vwr.popGroup()
  File "/usr/lib/python3.5/contextlib.py", line 77, in __exit__
    self.gen.throw(type, value, traceback)
  File "/home/firedrake/src/PyOP2/pyop2/petsc_base.py", line 347, in vec_context
    yield self._vec
  File "/home/firedrake/src/firedrake/firedrake/checkpointing.py", line 236, in load
    v.load(self.vwr)
  File "PETSc/Vec.pyx", line 445, in petsc4py.PETSc.Vec.load
petsc4py.PETSc.Error: error code 76
[3] VecLoad() line 931 in /tmp/pip-req-build-udnn6xhw/src/vec/vec/interface/vector.c
[3] VecLoad_Default() line 404 in /tmp/pip-req-build-udnn6xhw/src/vec/vec/utils/vecio.c
[3] VecLoad_HDF5() line 264 in /tmp/pip-req-build-udnn6xhw/src/vec/vec/utils/vecio.c
[3] Error in external library
[3] Error in HDF5 call H5Dopen2() Status -1

I'm assuming I've done something dumb and this is not a bug, but any help appreciated.

DIRK time integrators do not work with wetting-drying

Currently test/swe2d/test_thacker.py only tests the CrankNicolson time integrator.

Indeed, all rungekutta.DIRKGeneric based time integrators in the 2D solver (BackwardEuler, DIRK22, DIRK33) are not wetting-drying aware, as they do not assemble the modified bathymetry term (ShallowWaterTerm.wd_bathymetry_displacement).

Repository migration

I've renamed the repo to thetis and changed the files accordingly in PR #17. If Travis gives the green light I think we're good.

get_facet_mask is broken

utility.get_facet_mask appears to be broken, breaking the 3D model. Below is a minimal example:

from firedrake import *
mesh2d = UnitSquareMesh(10, 10)
mesh = ExtrudedMesh(mesh2d, 10, 0.1)
V = FunctionSpace(mesh, 'DG', 1)
from thetis import get_facet_mask
get_facet_mask(V, 'geometric', 'top')
# returns [0, 1, 2, 3, 4, 5]
get_facet_mask(V, 'geometric', 'bottom')
# also returns [0, 1, 2, 3, 4, 5]
# expected: the top/bottom nodes of a prism, e.g. [0, 1, 2] and [3, 4, 5]

Callback support for user defined diagnostic fields (exporting hdf5 and vtk)

It would be useful to have a generic way to export diagnostic fields at run time. For example, as part of some earlier work it became useful to export vorticity at regular intervals (I include the vorticity callback script below).

As it stands, there does not appear to be a standard way to export diagnostic fields like that. One suggestion raised (Tuomas) has been that the user should be able to declare a diagnostic field (e.g. vorticity_2d) and a callback function that is used to evaluate it. Once registered, the field should be accessible like any other field (solver.fields.vorticity_2d) and exported with options.fields_to_export = ['vorticity_2d'].

import os

import numpy as np

from thetis import *  # provides DiagnosticCallback, create_directory, File, DumbCheckpoint, ...


class Vorticity2DCallback(DiagnosticCallback):
    """Callback that calculates the vertical vorticity from a DG velocity field"""

    def __init__(self, solver_obj, time=0, name="vorticity2d", **kwargs):
        """
        :arg solver_obj: Thetis solver object
        :arg time: Starting time
        :arg **kwargs: any additional keyword arguments, see DiagnosticCallback
        """

        super(Vorticity2DCallback,self).__init__(solver_obj)
        kwargs.setdefault('append_to_log', False)
        kwargs.setdefault('export_to_hdf5', False)

        self.outputdir = create_directory(os.path.join(self.solver_obj.options.output_directory,name))
        self.solver_obj = solver_obj
        self.vtk_file = File(os.path.join(self.outputdir,'Vorticity2d.pvd'))
        self.time = time

        V = FunctionSpace(solver_obj.mesh2d, 'CG', 1)

        # Define trial and test functions
        self.zeta = TrialFunction(V)
        self.psi = TestFunction(V)
        self.vorticity_cg = Function(V, name=name)  # this is where the solution will be stored
        self.a = self.psi * self.zeta * dx
        self.n = FacetNormal(solver_obj.mesh2d)
        self._name = name
        self._variable_names = ["min_max"]

    @property
    def name(self):
        return self._name

    @property
    def variable_names(self):
        return self._variable_names

    def __call__(self):
        time = self.solver_obj.simulation_time + self.time
        uv, elev = self.solver_obj.timestepper.solution.split()

        # omega_z = du/dy - dv/dx
        L = (- Dx(self.psi, 1) * uv[0] + Dx(self.psi, 0) * uv[1]) * dx + \
            (self.psi * uv[0] * self.n[1] - self.psi * uv[1] * self.n[0]) * ds
        solve(self.a == L, self.vorticity_cg, solver_parameters={'ksp_type': 'cg'})

        # Exporting vtk File
        self.vtk_file.write(self.vorticity_cg)

        # Exporting hdf5 File
        chk = DumbCheckpoint(self.outputdir + "/Vorticity_%d" % int(time), mode=FILE_CREATE)
        chk.store(self.vorticity_cg, name="Vorticity_2d")
        chk.close()

        omega_min, omega_max = np.min(self.vorticity_cg.dat.data), np.max(self.vorticity_cg.dat.data)
        return [omega_min,omega_max]

    def message_str(self, *args):
        line = 'Exporting vorticity, minimum: {:f} , maximum: {:f}'.format(args[0],args[1])
        return line

Importing thetis followed by thetis_adjoint gives unwanted results

Not sure this is fixable - but it would be good to discuss.

If thetis is imported before thetis_adjoint, the exported firedrake objects such as solve, Function, etc. are the standard firedrake ones, and not the firedrake_adjoint wrappers, which obviously breaks the adjoint capability. For instance:

$ python -c "from thetis_adjoint import *; print solve.__module__"
dolfin_adjoint.solving
$ python -c "from thetis import *; from thetis_adjoint import *; print solve.__module__"
firedrake.solving

The reason is that if thetis is already imported, the from thetis import * at the last line of thetis_adjoint/__init__.py just reuses the already loaded thetis module, and thus never executes the from firedrake_adjoint import * in thetis/firedrake.py. Just adding an extra from firedrake_adjoint import * after the from thetis_adjoint import * (or equivalently adding this as the last line of thetis_adjoint/__init__.py) does not help as the symbols used by thetis itself will still be the original firedrake ones.

One of the things that this breaks is the py.test tests. If I run the adjoint tests (which have from thetis_adjoint import *) individually, they run fine; if I run them together with the other tests they break, as they will have been preceded by tests that have imported thetis.

3D solver is slow

With recent updates, 3D problems run ~6x slower. For example, the tracerBox and lockExchange examples are affected. The 2D example stommel2d seems to be unaffected.

Update 2D solver time step estimation

If the user provides a maximum velocity scale and maximum viscosity, we can compute the maximum stable time step based on the discretization and the time integration scheme. This kind of approach is already implemented for the 3D model.
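A back-of-the-envelope sketch (assumed form, not the existing 3D implementation) of the kind of estimate meant here, given a characteristic element size and the user-provided scales:

import math


def estimate_max_dt(dx, u_max, nu_max, depth, g=9.81, alpha=1.0):
    """Rough stable time step from user-provided scales (sketch only)."""
    c_wave = u_max + math.sqrt(g * depth)   # surface gravity wave speed
    dt_wave = alpha * dx / c_wave           # wave/advection (CFL) limit
    dt_visc = alpha * dx**2 / nu_max if nu_max > 0 else float('inf')
    return min(dt_wave, dt_visc)


print(estimate_max_dt(dx=1000.0, u_max=2.0, nu_max=100.0, depth=20.0))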

sspimex broken with swe

After changes to the time integrators and the SWE equations/terms, the channel2d test no longer works. Fixes to update the calls to the new API are in the fix-swe-sspimex branch, but the example still blows up during the run.

Logo?

How about the below as a temporary logo?
[screenshot: proposed temporary logo, 2016-02-17]

Vertical velocity solver is broken

Vertical velocity solver appears to be broken. E.g. test continuity3d/test_continuity_mes.py::test_setup1 fails.

There have been no related changes in Thetis recently so it's likely that this is a Firedrake bug.

Travis fails with "No module named scipy"

In #6 travis dies with

==================================== ERRORS ====================================
__________ ERROR collecting test/continuity3d/test_continuity_mes.py ___________
test/continuity3d/test_continuity_mes.py:8: in <module>
    from scipy import stats
E   ImportError: No module named scipy
__________ ERROR collecting test/swe2d/test_steady_state_basin_mms.py __________
test/swe2d/test_steady_state_basin_mms.py:15: in <module>
    from scipy import stats
E   ImportError: No module named scipy
__________ ERROR collecting test/tracerEq/test_steady_adv-diff_mms.py __________
test/tracerEq/test_steady_adv-diff_mms.py:8: in <module>
    from scipy import stats
E   ImportError: No module named scipy
=============== 53 passed, 2 skipped, 3 error in 988.77 seconds ================
The command "py.test --travis -v test" exited with 1.

although python-scipy is in .travis.yml. Any ideas?

Invalid initial expressions in test_steady_state_basin_mms.py

In setup9:

    grad_h_x = '''h0/(2.0*lx*sqrt(0.3*x[0]*x[0] + 0.2*x[1]*x[1]))*0.6*x[0] - 3.0*pi/lx*sin(pi*(3.0*x[0] + 1.0*x[1])/lx)'''
    grad_h_y = '''h0/(2.0*lx*sqrt(0.3*x[0]*x[0] + 0.2*x[1]*x[1]))*0.4*x[1] - 1.0*pi/lx*sin(pi*(3.0*x[0] + 1.0*x[1])/lx)'''

If x is the origin (entirely plausible), then we end up with:

grad_h_x = h0/(2*lx*sqrt(0))*0.6*0 - ...

=> BOOM.

Issues running tidal_lagoon example

(firedrake) jh1889@rheic ~/work/software/firedrake/src/thetis/examples/tidal_barrage $ python lagoon2d.py
dt = 25
Using time integrator: CrankNicolson
Traceback (most recent call last):
  File "lagoon2d.py", line 88, in <module>
    number_timesteps=5, name="lagoon_1", export_to_hdf5=True)
TypeError: Can't instantiate abstract class LagoonCallback with abstract methods message_str

Up-to-date firedrake and thetis (updated this morning). Let me know if you want any other info.

Firedrake limiter masks Thetis limiter

Thetis is not happy with the namespaces

In [1]: from thetis import *

In [2]: limiter.__name__
Out[2]: 'firedrake.slope_limiter.limiter'

In [3]: from thetis import limiter

In [4]: limiter.__name__
Out[4]: 'firedrake.slope_limiter.limiter'

How should I import thetis.limiter for this to work?

LinearProblemCache (and temp Function Cache) are somewhat icky

I think I understand why these are here; however, they can cause some slightly strange side effects.

I think the function cache is easier to address. Effectively you're using this for "work vectors". When you want a temporary function, you ask for one given a particular function space.

I think we could add something like this to Firedrake. Here's how I would do it.

The FunctionSpace object gains a get_function(name=None), restore_function(function) pair. You can then get a temporary function that you can use:

f = V.get_function()
... do stuff with f
V.restore_function(f)

We can wrap this up in a context manager so one could do:

with V.work_space() as f:
    ... do stuff with f

This is neater, because if the only reference to the work space is inside V, once V gets collected, so do the temporary functions.

The linear problem cache is a little trickier; however, I think it is still possible to fix. The entries in the problem cache become invalid when the mesh they were built on top of goes out of scope. So we could make these collectable by adding a level on top using a WeakKeyDictionary. Sketch:

import weakref
class ProblemCache(object):

    def __init__(self):
        self._cache = weakref.WeakKeyDictionary()

    def __getitem__(self, key):
        mesh, key = key
        return self._cache[mesh][key]

    def __setitem__(self, key, val):
        mesh, key = key
        d = self._cache.get(mesh, {})
        d[key] = val
        self._cache[mesh] = d

    def __contains__(self, key):
        mesh, key = key
        return mesh in self._cache and self._cache[mesh].get(key) is not None

    def clear(self):
        self._cache.clear()

So the idea is to use a weakref to the Mesh as a first-level key, the second level acts as before. Now if the Mesh goes out of scope, the cached problems are no longer strongly referenced by the cache and can be collected.
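Exercising the sketch above (assumes the ProblemCache class just defined; the mesh and solver here are throwaway stand-ins):

class DummyMesh(object):
    """Stand-in for a mesh; any weakly referenceable object works."""
    pass


cache = ProblemCache()
mesh = DummyMesh()
cache[(mesh, 'momentum_solver')] = 'cached solver object'
print((mesh, 'momentum_solver') in cache)  # True

del mesh                                   # mesh goes out of scope ...
print(len(cache._cache))                   # ... 0 in CPython: the entry is collectable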

Memory leak in solver

Running the Columbia plume example on our Linux cluster runs out of memory.

Here's a small test case that reproduces the issue. The mesh file is located in examples/columbia_plume/.

from thetis import *

meshfile = 'mesh_cre-plume_03_normal.msh'
mesh2d = Mesh(meshfile)
print_output('Loaded mesh ' + mesh2d.name)
# mesh2d = RectangleMesh(90, 90, 100e3, 100e3)

nlayers = 20
dt = 15.
t_export = 150.
t_end = 50*t_export

P1_2d = FunctionSpace(mesh2d, 'CG', 1)
bathymetry_2d = Function(P1_2d, name='bathymetry')
bathymetry_2d.assign(200.)

solver_obj = solver.FlowSolver(mesh2d, bathymetry_2d, nlayers)
options = solver_obj.options
options.element_family = 'dg-dg'
options.timestepper_type = 'SSPRK22'
options.solve_salinity = True
options.solve_temperature = False
options.use_implicit_vertical_diffusion = False
options.use_bottom_friction = False
options.use_turbulence = False
options.use_baroclinic_formulation = False
options.simulation_export_time = t_export
options.simulation_end_time = t_end
options.timestepper_options.use_automatic_timestep = False
options.timestep = dt
options.no_exports = True

solver_obj.assign_initial_conditions(salt=Constant(30.0))
solver_obj.iterate()

Running this with 96 cores shows a ~linear increase in the memory consumption of rank 0:

pidstat -h -r -p 19096 15
Linux 3.10.0-514.26.2.el7.x86_64 (compute-0-0)  05/03/2019      _x86_64_        (16 CPU)

#      Time   UID       PID  minflt/s  majflt/s     VSZ    RSS   %MEM  Command
 1556868669 51118     19096  11164.60      0.13 2113196 378416   0.58  python
 1556868684 51118     19096   4139.53      0.00 2113732 379028   0.58  python
 1556868699 51118     19096   2289.73      0.00 2115304 380472   0.58  python
 1556868714 51118     19096    416.73      0.00 2116508 381656   0.58  python
 1556868729 51118     19096   1909.07      0.00 2117632 382872   0.58  python
 1556868744 51118     19096    395.40      0.00 2118832 384096   0.58  python
 1556868759 51118     19096     78.07      0.00 2119928 385260   0.59  python
 1556868774 51118     19096   1885.40      0.00 2121128 386408   0.59  python
 1556868789 51118     19096     43.33      0.00 2122364 387516   0.59  python
 1556868804 51118     19096    401.07      0.00 2123428 388728   0.59  python
 1556868819 51118     19096     57.40      0.00 2124628 389808   0.59  python
 1556868834 51118     19096   1868.27      0.00 2125828 391136   0.59  python
 1556868849 51118     19096    264.40      0.00 2126956 392296   0.60  python
 1556868864 51118     19096    789.93      0.00 2128052 393464   0.60  python
 1556868879 51118     19096    747.53      0.00 2129508 394624   0.60  python
 1556868894 51118     19096   2065.40      0.00 2130708 395788   0.60  python
 1556868909 51118     19096     33.87      0.00 2131832 396780   0.60  python
 1556868924 51118     19096    447.13      0.00 2133068 398096   0.61  python
 1556868939 51118     19096     60.67      0.00 2134168 399272   0.61  python
 1556868954 51118     19096   3031.33      0.00 2135300 400472   0.61  python
 1556868969 51118     19096     42.60      0.00 2136532 401472   0.61  python
 1556868984 51118     19096    472.20      0.00 2137596 402816   0.61  python
 1556868999 51118     19096     63.07      0.00 2138792 403960   0.61  python
 1556869014 51118     19096   1782.80      0.00 2139892 405208   0.62  python
 1556869029 51118     19096     47.60      0.00 2141120 406320   0.62  python
 1556869044 51118     19096    465.93      0.00 2142320 407600   0.62  python
 1556869059 51118     19096     76.80      0.00 2143520 408640   0.62  python
 1556869074 51118     19096   1779.67      0.00 2144620 409912   0.62  python

Conservative ALE formulation

On a static mesh a forward Euler update of field T can be expressed as

M*T_{n+1} - M*T_{n} = dt*L

where M is the mass matrix and L is the linear form of the equation.
If the mesh is time dependent, the conservative arbitrary Lagrangian-Eulerian (ALE) formulation becomes

M_{n+1}*T_{n+1} - M_{n}*T_{n} = dt*L

where the mass matrices now depend on time. Here L is also modified to take into account the mesh velocity.

What is the easiest way to implement this in firedrake? My guess is that it would make sense to compute and store M_{n}*T_{n} in the previous time step and then use it to compute T_{n+1} in the next iteration. Thus there would be no need to store the mass matrices.

Alternatively you could cast the old solution to the new mesh and then do a conventional solve on a static mesh:

M_{n+1}*T_{n+1} - M_{n+1}*P*T_{n} = dt*L

with P = inv(M_{n+1})*M_{n}. But this is probably costly.

This should work on arbitrary function spaces: The formulation applies to both tracers (which are most likely DG) and velocity (which can be partially continuous).
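A minimal 1D Firedrake sketch of the first option (names and the trivial stand-in operator L are assumptions, not Thetis code): assemble M_n*T_n + dt*L with the old coordinates, move the mesh, re-assemble only the new mass matrix, and solve.

from firedrake import (UnitIntervalMesh, FunctionSpace, Function, TestFunction,
                       TrialFunction, Constant, assemble, solve, inner, dx)

mesh = UnitIntervalMesh(10)
V = FunctionSpace(mesh, 'DG', 1)
phi = TestFunction(V)
u = TrialFunction(V)

T_old = Function(V, name='T_old')
T_old.assign(1.0)
dt = Constant(0.1)
rhs_source = Constant(0.0)   # stand-in for the (mesh-velocity aware) operator L

# right-hand side assembled with the old mesh coordinates: M_n*T_n + dt*L
b = assemble((inner(T_old, phi) + dt*rhs_source*phi)*dx)

# ... mesh.coordinates would be updated to the new mesh position here ...

# mass matrix on the new geometry, then a linear solve for T_{n+1}
M_new = assemble(inner(u, phi)*dx)
T_new = Function(V, name='T_new')
solve(M_new, T_new, b)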
