PyClaw is a Python-based interface to the algorithms of Clawpack and SharpClaw. It also contains the PetClaw package, which adds parallelism through PETSc.
The benchmark test case still needs to be defined. We are looking for one, and it should be settled within one or, at most, two days. Once the test case is selected, the next steps can be defined.
I don't think it will work as coded right now. It may not be worth supporting this combination anyway, but if we decide not to support it, we should check for it and raise a helpful exception.
Setting the aux array is somewhat different from setting q because boundary values must also be set on initialization. Right now the user is required to create a numpy array of the appropriate size and fill it in. Aron suggested providing a simpler interface that would only require the user to provide a function setaux(x, y, dx, dy) that takes arrays of cell-center coordinates and the cell widths and returns an array of aux values for the cells. The grid would then have a method that applies this function automatically using its own coordinates. It was agreed that this would be helpful and should be supported, without removing support for the current approach. The grid could also have a method to impose common "boundary conditions" (zero-order extrapolation and periodic) to generate the ghost-cell aux values.
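A minimal sketch of what such a user-supplied function might look like; the grid method name at the end is hypothetical, and the formula is purely illustrative:

    import numpy as np

    def setaux(x, y, dx, dy):
        # x, y are 1D arrays of cell-center coordinates; dx, dy are cell widths.
        # aux holds a single component here (e.g. a bathymetry field); the
        # formula is only an illustration.
        aux = np.empty((1, x.size, y.size))
        aux[0, :, :] = 0.1 * np.sin(x[:, np.newaxis]) * np.cos(y[np.newaxis, :])
        return aux

    # Hypothetical usage: the grid evaluates setaux at its own cell centers and
    # then fills the ghost values by zero-order extrapolation.
    # grid.set_aux_from_function(setaux, mbc=2, bc='extrap')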
The user will be able to provide a function that accepts q and returns a vector of derived quantities. These two vectors are not necessarily the same size. As a simple example, the user may wish to only output one component of q.
In order to implement this efficiently for parallel runs, it is necessary to create another DA for the derived quantities. The user will be able to specify the frequency of writing both output and checkpoint files separately (the checkpoint files will contain just q).
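As a rough sketch of the kind of function the user would provide (the function name and the pressure formula are illustrative assumptions for 2D Euler, not PyClaw API), returning fewer components than q so that a separate, smaller DA would hold the output in parallel:

    import numpy as np

    def compute_derived(q):
        # q has shape (meqn, mx, my); the returned array may have a different
        # number of components. Here only the density and a derived pressure
        # are written out.
        gamma = 1.4
        rho, mom_x, mom_y, energy = q[0], q[1], q[2], q[3]
        pressure = (gamma - 1.0) * (energy - 0.5 * (mom_x**2 + mom_y**2) / rho)
        out = np.empty((2,) + q.shape[1:])
        out[0] = rho
        out[1] = pressure
        return out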
FAILED (errors=1)
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: likely location of problem given in stack below
[0]PETSC ERROR: --------------------- Stack Frames ------------------------------------
[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[0]PETSC ERROR: INSTEAD the line number of the start of the function
[0]PETSC ERROR: is given.
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
[unset]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
We have been bitten by this 3 times now. @Ahamadia, what is the best way to do this? Can we check the architecture we're on from inside Python? Or we could move the logging configuration files around in our setup scripts on Shaheen.
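One possibility (a sketch only; the hostname test and file names are assumptions) is to pick the logging configuration at run time instead of moving files around in the setup scripts:

    import socket

    def select_logging_config():
        # Choose a logging config file based on where we are running.  The
        # 'shaheen' hostname test and the file names below are hypothetical.
        if 'shaheen' in socket.gethostname().lower():
            return 'log.config.shaheen'
        # platform.machine() or os.uname() could likewise be used to test the
        # architecture if hostnames turn out not to be reliable.
        return 'log.config'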
We should never need to communicate aux, since every processor knows how to fill its own ghost cells. But it seems very difficult to find a good way to code this without calling globalToLocal.
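A conceptual sketch (one-dimensional for brevity; the function and argument names are assumptions, not PetClaw code) of why no communication should be needed: each process can reconstruct the coordinates of its own ghost cells and evaluate setaux there directly, so the values never have to pass through globalToLocal.

    import numpy as np

    def fill_local_aux(setaux, xlower, dx, i_start, i_end, mbc):
        # i_start:i_end are the global cell indices owned by this process.
        # The ghost-cell centers are reconstructed locally from xlower and dx,
        # so aux (including ghosts) is filled without any communication.
        i = np.arange(i_start - mbc, i_end + mbc)
        x_centers = xlower + (i + 0.5) * dx
        return setaux(x_centers)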