ASiMoV-CCS

ASiMoV-CCS is a CFD and combustion code designed to scale to large numbers of cores. It follows a "separation of concerns" design that separates interfaces from implementations, and physics from parallelisation. There is a distinction between "user" and "back-end" code: case-specific code (such as the setup of a particular test) is user code, while the core functionality provided by ASiMoV-CCS is back-end code.

ASiMoV-CCS is implemented in a modular fashion by separating the interface declarations contained within modules from their implementation in submodules. As a result, it is possible to implement multiple physics models and parallelisation strategies by writing separate submodules, each providing a distinct solution.
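
As an illustration of this pattern (a minimal, hypothetical sketch: the module and routine names below are invented for illustration and do not appear in the CCS source), an interface is declared in a module and each implementation lives in its own submodule:

module solver_mod
  implicit none
  interface
    module subroutine solve(x)
      real, intent(inout) :: x(:)
    end subroutine solve
  end interface
end module solver_mod

submodule (solver_mod) solver_mpi
contains
  module subroutine solve(x)
    real, intent(inout) :: x(:)
    ! a backend-specific implementation (e.g. MPI-based) would go here
    x = 0.0
  end subroutine solve
end submodule solver_mpi

Swapping the physics model or parallelisation strategy then amounts to linking a different submodule against the same interface.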

Prerequisites

ASiMoV-CCS requires an MPI library plus the dependencies located by the environment variables listed under Building: PETSc, fortran-yaml-c, ADIOS2, ParHIP and/or ParMETIS, and rcm-f90.

N.B. only one of ParHIP or ParMETIS is used to partition the problem; at least one of the two must be available when building (see the note under Building).

Building

Set the following environment variables:

  • PETSC_DIR to point to the PETSc install directory
  • FYAMLC to point to the root of your fortran-yaml-c build directory
  • ADIOS2 to point to the ADIOS2 install directory
  • PARHIP to point to the root of the ParHIP install directory (optional)
  • PARMETIS to point to the root of the ParMETIS install directory (optional)
  • RCMF90 to point to the root of the rcm-f90 install directory

N.B. at least one of PARHIP or PARMETIS must be set when building.
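
For example, in a bash-style shell (the paths below are placeholders for your own installation locations):

export PETSC_DIR=/path/to/petsc
export FYAMLC=/path/to/fortran-yaml-c/build
export ADIOS2=/path/to/adios2
export PARHIP=/path/to/parhip        # or PARMETIS=/path/to/parmetis
export RCMF90=/path/to/rcm-f90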

With the prerequisites in place, ASiMoV-CCS can be built from the root directory with

make CMP=<compiler> all

where <compiler> is one of: gnu, intel or cray. CMP selects the compilation environment according to the files in build_tools/archs/Makefile.<compiler>, which can be customised to suit your environment. This also packages the compiled code into a library, libccs.a, which can be built explicitly with

make CMP=<compiler> lib

Tests can be run with

make CMP=<compiler> tests

Configuring

The executable ccs_app that is built/linked is configured by config.yaml. The Fortran main program is selected by the main: key; the available options currently reside in case_setup and are

  • poisson to solve a simple Poisson problem
  • scalar_advection to solve a scalar advection case (frozen velocity field)
  • ldc to run the Lid Driven Cavity problem
  • tgv and tgv2d to run the 3-D and 2-D Taylor-Green Vortex cases, respectively

The build system automatically links the required modules, but submodules must be specified in the configuration file. The base: key selects a collection of basic submodules that work together; the available options are mpi and mpi_petsc. Further customisation is available via the options: settings for cases where multiple implementations of the same functionality exist (none at the time of writing). All possible configuration settings can be found in build_tools/config_mapping.yaml.
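
For example, a minimal config.yaml selecting the lid driven cavity program together with the MPI+PETSc submodules would read (a sketch based on the keys described above):

main: ldc
base: mpi_petsc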

Running

The generated application can be run as a normal MPI application, for example

mpirun -n 4 /path/to/ccs_app --ccs_case <CaseName>

will use the runtime configuration file CaseName_config.yaml in the current directory. An input path can also be provided as

mpirun -n 4 /path/to/ccs_app --ccs_case CaseName --ccs_in /path/to/case/dir/

where /path/to/case/dir/ contains the configuration file CaseName_config.yaml.

ccs_app accepts a number of runtime command line arguments; see ccs_app --ccs_help for details. If built with PETSc, the usual PETSc command line arguments can also be passed to ccs_app. In particular, if you observe stagnating residuals being printed to stdout, you may want to try tighter tolerances for the linear solvers by passing -ksp_atol and -ksp_rtol at runtime, e.g.

mpirun -n ${NP} /path/to/ccs_app <ccs_options> -ksp_atol 1.0e-16 -ksp_rtol 1.0e-50

should give behaviour similar to a direct solver.

Running the Taylor Green Vortex case

Change into the directory where the Taylor Green Vortex configuration file (TaylorGreenVortex2D_config.yaml) resides, i.e.

cd src/case_setup/TaylorGreenVortex

You can change the values in the configuration file to customise your setup.

The command line option --ccs_m allows you to set the size of the mesh (the default is 50x50). You can run the case as follows:

mpirun -n 4 ../../../ccs_app --ccs_m 100 --ccs_case TaylorGreenVortex2D

To run the 3D TGV case, change config.yaml to state main: tgv instead of main: tgv2d, rebuild, and use the run line:

mpirun -n 4 ../../../ccs_app --ccs_m 32 --ccs_case TaylorGreenVortex

The default 3D problem size is 16x16x16.

In both cases the time evolution of the kinetic energy and enstrophy is written to the files tgv-ke.log and tgv-ens.log (tgv2d-*.log in the 2-D case). In the 2-D case an analytical solution also exists and is used to compute the RMS error in the velocity solution; note that this requires setting the viscosity mu in tgv2d.f90 to match the value of diffusion_factor in fv_common.f90. Using this error log, the convergence rates of the central and upwind schemes can be confirmed by systematic refinement of the grid, noting that for the central scheme the cell Peclet number must be less than 2:

Pe = rho U dx / mu < 2

Here rho and U are both 1, and the grid spacing dx equals the domain length (pi) divided by the number of cells in each direction (default 50). On the default grid, for example, dx = pi/50 ≈ 0.063, so Pe < 2 requires mu > pi/100 ≈ 0.031.

To change the simulated time of either case, edit the appropriate program file and set either the timestep directly or the CFL number.

Running the Lid Driven Cavity case

To run the LDC case, you need to change config.yaml to state main: ldc and rebuild.

Change into the directory where the Lid Driven Cavity configuration file (LidDrivenCavity_config.yaml) resides, i.e.

cd src/case_setup/LidDrivenCavity

You can change the values in the configuration file to customise your setup. For example, by default the number of iterations is set to 10, but you might want to change this to something like 1000.

The command line option --ccs_m allows you to set the size of the mesh (the default is 50x50). You can run the case as follows:

mpirun -n 4 ../../../ccs_app --ccs_m 129 --ccs_case LidDrivenCavity

The results can be plotted against Ghia's reference data by running the plot-structured.py script from the results directory:

python ../../../scripts/plot-structured.py

This requires either an ADIOS2 build with Python support or, as a fallback, h5py (pip install h5py).

Further documentation

Code documentation

CCS uses FORD for code documentation. You can install FORD using pip install ford. Documentation can then be generated using make ford. This will generate HTML documentation in the doc directory with entry point index.html.
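
That is (assuming Python and pip are available on your system):

pip install ford
make ford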

See https://github.com/Fortran-FOSS-Programmers/ford and https://forddocs.readthedocs.io/en/latest/ for more information on FORD.

Developer documentation

A developer and style guide can be generated using make dev_guide. This requires a LaTeX installation. The output can be found in dev_guide/ccs_dev_guide.pdf.

Documentation describing the build system, testing framework and linting in CCS is also provided in the repository.
