
atlas's People

Contributors

aerorahul, anmrde, avibahra, b8raoult, benjaminmenetrier, ckuehnlein, cosunae, danholdaway, ddegrauwe, dsarmany, dtip, figi44, fmahebert, iainrussell, kynan, marekwlasak, mengaldo, mo-lormi, mo-marcomilan, odlomax, oiffrig, pmaciel, pmarguinaud, samhatfield, sbrdar, svahl991, tlmquintino, twsearle, wdeconinck, ytremolet


atlas's Issues

Accessing the same ATLAS data from both C++ and Fortran

Hello @wdeconinck. My problem is the following: I create an ATLAS structure (e.g. a NodeColumns object) in a C++ layer, and I would like to access it from a Fortran subroutine that is called by the C++ layer.

I have noticed that in Fortran, an atlas_functionspace_NodeColumns object can be initialized with a C pointer type(c_ptr), which is probably the solution to my problem.

However, once I have generated the ATLAS structure in C++, I don't know what to pass to the Fortran interface, in order to access and modify the data from the Fortran routine. Do you know how to do that?

Thank you.

ctest failure in atlas_fctest_unstructuredgrid

What happened?

Building atlas and running the ctests gives the failure shown in the relevant log output below.

What are the steps to reproduce the bug?

Build atlas and run the ctests. I am also using the head of develop of eckit and fckit.

Version

head of develop

Platform (OS and architecture)

Linux & Cray, gcc

Relevant log output

52: Test command: /home/h01/frwd/cylc-run/AtlasFailure/share/build-mo-spice_gnu_debug/atlas/src/tests/grid/atlas_fctest_unstructuredgrid
52: Working Directory: /home/h01/frwd/cylc-run/AtlasFailure/share/build-mo-spice_gnu_debug/atlas/src/tests/grid
52: Environment variables: 
52:  OMP_NUM_THREADS=1
52: Test timeout computed to be: 10000000
52: /spice/scratch/frwd/cylc-run/AtlasFailure/share/mo-bundle/atlas/src/tests/grid/fctest_unstructuredgrid.F90:228: warning: FCTEST_CHECK_EQUAL( grid%owners(), 2 )
52:  --> [           1 !=           2 ]
52: STOP 1

Accompanying data

No response

Organisation

Met Office

ForEach method very slow on small arrays

The current ForEach method is particularly slow when processing a large number of small arrays. This is probably due to the overhead of using a config object to set the execution policy.

Issue addressed by PR 113.

Creating a FunctionSpace changes Mesh

What's the design rationale for having the halo option on the function-space constructor rather than on the mesh constructor, given that it changes the mesh? It's quite unexpected.

My pattern is:

atlas::StructuredGrid structuredGrid = atlas::Grid("O32");
atlas::MeshGenerator::Parameters generatorParams;
generatorParams.set("triangulate", true);
generatorParams.set("angle", -1.0);
atlas::StructuredMeshGenerator generator(generatorParams);
auto mesh = generator.generate(structuredGrid);
atlas::functionspace::NodeColumns fs_nodes_(mesh, atlas::option::levels(nb_levels)
     /*| atlas::option::halo(1)*/); // <--

std::cout << mesh.nodes().size() << std::endl;

Interpolation::NonLinear datatype incompatible with atlas::Fieldsets of heterogeneous types

Is your feature request related to a problem? Please describe.

I would like to be able to have an atlas fieldset that contains a mix of types, both float fields and double fields. However, I hit an issue when it comes to interpolating fields that contain missing values.

At configuration time, I need to configure my atlas interpolator for my entire fieldset. The only way to configure the Missing adjuster for the interpolation weights is at this time, using a configuration that is passed into a builder here:

static B<M1<float>> __nl3(M1<float>::static_type() + "-" + T<float>());

However, this configuration holds globally for all fields in the fieldset, meaning that they must all have a matching type. If this is not the case, an error is thrown when attempting to construct a field view of an incompatible datatype:

static array::ArrayView<typename std::add_const<Value>::type, Rank> make_view_field_values(const Field& field) {

For example, using missing-if-all-missing requires that all fields to be interpolated are doubles; missing-if-all-missing-real32 requires that they are all floats.

Describe the solution you'd like

I think that, ideally, the missing-values object datatype would be configured based on the runtime atlas field type, rather than on the configuration string passed to the factory. This would have two benefits:

  1. The methods used would depend solely on the type of the atlas::Field, and so be determined in one place. This lends clarity: in my program at the moment the configuration has two sections which must both be updated depending on the type of the atlas::Field you select.
  2. It would be possible to have fieldsets containing a mix of double and float fields, which would be useful for performance and precision when required.

Describe alternatives you've considered

  1. Update the atlas interpolator configuration at runtime based on the field types selected.
    • This isn't trivial to do, given the high-level layout of the JEDI-OOPS framework.
    • This doesn't solve my second issue of wanting a mix of float and double types in a fieldset.
  2. Use multiple field sets and multiple interpolators with different configurations
    • It's possible this is the way forward, although it would incur a memory and performance cost, as I would have to set up essentially the same interpolation object more than once.
  3. Keep all Fields in my program of a consistent type.
    • This is my approach at the moment; however, I have found that I needed to switch from double to float to fit the program in memory, and I am concerned that future data (ocean colour is log-distributed, for example) might be sensitive to the move from double to float.

Additional context

MetOffice/orca-jedi#78

Organisation

Met Office

The projection class lacks a jacobian method

A Jacobian calculation for each projection would make it possible to compute the base vectors of the projection, and to interpolate vector fields between different projections.

This feature is therefore highly desirable.

Set UnstructuredGrid in meshes built by MeshBuilder

Is your feature request related to a problem? Please describe.

Some JEDI applications will want to use the MeshBuilder but also to call mesh.grid(). Currently the grid is not set.

Describe the solution you'd like

Setting the grid requires a global list of point coordinates.

I propose this change: MeshBuilder takes an MPI comm as argument, does an allGather of the (owned) point lons/lats, and constructs an UnstructuredGrid from the global list of points.

But there are some alternative options:

  • MeshBuilder takes a global list of lons/lats as argument (instead of currently taking local points) and uses these to construct an UnstructuredGrid. For setting up the connectivities, getting the right subset from the global list will I think require either a new indexing argument or requiring a structure in the global_indices (e.g.: 1-based, continuous indexing) of the points so that they can be reliably indexed.
  • MeshBuilder takes a Grid as argument, putting the responsibility on the user.

I think the proposed change is a more straightforward design and easier to reason about. The alternative options put more burden on the user and are a more complicated design (build a global list, then find the required subset), but they do avoid adding an MPI comm to the argument list.

@wdeconinck do you have a preferred design?

Describe alternatives you've considered

No response

Additional context

No response

Organisation

UCAR/JCSDA

ij2gidx & gidx2ij for structured grids

It would be very desirable for structured grids to have methods:

  • to compute the global index from the i and j indices
  • to compute the i and j indices from the global index

Improve performance of missing-values-containing field interpolation

Is your feature request related to a problem? Please describe.

Our application spends a lot of time applying the 'nonlinear' adjustment to the interpolation weights during interpolation. I believe this is a significant opportunity for performance improvements.

Describe the solution you'd like

I am imagining we could speed this up by utilising a similar approach to that used by other systems. We could cache the weights including applying a land mask at every level. This has the downside of increasing the size of the weights in memory by a multiple of the number of levels. However, it has the upside that we could use standard sparse matrix multiplication techniques to apply the weights when interpolating, rather than re-adjusting the weights for every field and every level at each application in here:

// We cannot apply the same matrix to full columns as e.g. missing values could be present in only certain parts.

I think this would be the equivalent of adding four extra fields to a fieldset in terms of memory cost, but it would lead to an O(n_levels*n_fields) speed-up in applying the interpolation for multi-field, multi-level fieldsets.

Describe alternatives you've considered

There are other potential improvements to interpolation with missing values (such as reducing the number of copies). One suggestion, in a comment in the code, is to perform copy-on-write only. I think my suggestion should be considerably more performant than anything else I can think of, but I am open to suggestions by all means.

Additional context

No response

Organisation

Met Office

Supporting a stretched rotated limited area grid.

From my reading of the code, the current capability allows xy space to have a varied delta_x in the x direction but a fixed delta_y in the y direction when using the StructuredGrid data structure.

The Met Office's regional UKV model is on a rotated grid which is separable in the x and y directions but has a varied delta_x in the x direction and a varied delta_y in the y direction. (It has a constant delta_x and delta_y of 1.5 km in an interior region, which then stretch to 4 km in a border region.)

There are three possible implementations - @wdeconinck - it would be great to get your thoughts on this.
All involve creating an additional example grid.

Version A:

  1. Have the xy object be a regular grid with constant delta_x and delta_y equal to those of the interior part.
  2. Create a projection (say "stretched_rotated_lonlat") that applies a stretch in the x and y directions, followed by an "unrotate", to get to the real longitudes and latitudes of the grid. The stretch itself would be defined by reading a vector for both x and y from the yaml file.

Version B:

  1. As Version A but use the stretched capability in xy in the x direction.
  2. The projection would then stretch in the y direction and then rotate.

Version C:

  1. As Version A, but implement a separate stretched projection for both dimensions.
  2. Then add the capability to define projections as a product of simpler projections.

Support limited area grids in StructuredColumns linear interpolation

Citing @wdeconinck:
This “ComputeNorth” class is used to detect the y-index (or latitude-index) of the latitude north of a given coordinate.
During the construction of this class, it sets up some small arrays to help. It is created for global grids, where it is more complicated to deal with extra latitude rows that go north of the North Pole (and south of the South Pole).
For a regional grid this should be much simpler and easy to implement. I just did not get around to it, and nobody had any use case for it, until today.

Development of efficient conservative spherical polygon interpolation

✅ Phase 1: #130

Problem:

  • Polygon intersection is not precise for pole points with more than 10k edges going into them; e.g. an S43200x21600 source to an O1280 target has significant under-coverage of target cells at the poles.

Tasks:

  • Easier diagnosis: add automatic detection of the worst example of target-cell coverage, so it can be easily visualised in the Python polygon viewer.

  • Fix problem by revision of polygon intersection algorithm

Phase 2: Not started

Problem:

  • The algorithm to find suitable source polygons for intersection makes use of a "compare_pointxyz" function which contains an epsilon that needs to be tweaked depending on grid resolution.
  • The same algorithm is expensive and contains an OpenMP "critical" region.

Tasks:

  • Make use of deterministic information to detect source polygons. Only source polygons which are not in the periodic halo should be considered.

Phase 3: Not started

Problem:

  • The algorithm loops over source polygons and finds suitable target polygons for intersection. Interpolation weight contributions are then assembled for the target polygons. This prevents efficient OpenMP parallelisation, by introducing an OpenMP "critical" region.
  • The resulting vector of eckit::linalg::Triplet then contains multiple triplets corresponding to the same row, col of the interpolation matrix. Before creating the interpolation matrix these are now merged into unique contributions, which is very inefficient.

Task:

  • Invert the algorithm to loop over target polygons to detect source polygons. Probably a can of worms...

Ctest failure in atlas_fctest_trans

What happened?

Running the atlas ctests with head of develop compiled in conjunction with ectrans gives a ctest failure in atlas_fctest_trans.

What are the steps to reproduce the bug?

This is compiled against eckit 1.24, fckit 0.10.1, fiat 1.1.1 and ectrans 1.2.0. The log output below was produced with the GNU compiler.

Version

Head of develop

Platform (OS and architecture)

Cray and Linux

Relevant log output

182/197 Test #166: atlas_fctest_trans ........................................***Failed   11.49 sec
/research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:113: warning: FCTEST_CHECK_EQUAL( trans_spectral%nb_spectral_coefficients_global(), (truncation+1)*(truncation+2) )
 --> [           6 !=         506 ]
/research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:113: warning: FCTEST_CHECK_EQUAL( trans_spectral%nb_spectral_coefficients_global(), (truncation+1)*(truncation+2) )
 --> [           6 !=         506 ]
/research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:113: warning: FCTEST_CHECK_EQUAL( trans_spectral%nb_spectral_coefficients_global(), (truncation+1)*(truncation+2) )
 --> [           6 !=         506 ]
/research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:113: warning: FCTEST_CHECK_EQUAL( trans_spectral%nb_spectral_coefficients_global(), (truncation+1)*(truncation+2) )
 --> [           6 !=         506 ]
 nodes_fs%owners()           1
 nodes_fs%owners()           2
 nodes_fs%owners()           3
 spectral_fs%owners()           1
/research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:129: warning: FCTEST_CHECK_EQUAL( spectral_fs%nb_spectral_coefficients_global(), (truncation+1)*(truncation+2) )
 --> [           6 !=         506 ]
/research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:129: warning: FCTEST_CHECK_EQUAL( spectral_fs%nb_spectral_coefficients_global(), (truncation+1)*(truncation+2) )
 --> [           6 !=         506 ]
 spectral_fs%owners()           2
/research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:129: warning: FCTEST_CHECK_EQUAL( spectral_fs%nb_spectral_coefficients_global(), (truncation+1)*(truncation+2) )
 spectral_fs%owners()           3
 --> [           6 !=         506 ]
 shape =            4
/research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:129: warning: FCTEST_CHECK_EQUAL( spectral_fs%nb_spectral_coefficients_global(), (truncation+1)*(truncation+2) )

Program received signal SIGSEGV: Segmentation fault - invalid memory reference.

Backtrace for this error:
 --> [           6 !=         506 ]

Program received signal SIGSEGV: Segmentation fault - invalid memory reference.

Backtrace for this error:
#0  0x2b262446390f in ???
#0  0x2b47d9b6090f in ???
#1  0x408ad0 in __fctest_atlas_trans_MOD_test_trans
	at /research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:146
#2  0x403f77 in run_fctest_atlas_trans
	at /home/d03/frwd/cylc-run/UpdateAtlas/share/build-mo-cray_gnu/atlas/src/tests/trans/fctest_trans_main.F90:22
#3  0x403f77 in main
	at /home/d03/frwd/cylc-run/UpdateAtlas/share/build-mo-cray_gnu/atlas/src/tests/trans/fctest_trans_main.F90:3
#1  0x408ad0 in __fctest_atlas_trans_MOD_test_trans
	at /research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:146
#2  0x403f77 in run_fctest_atlas_trans
	at /home/d03/frwd/cylc-run/UpdateAtlas/share/build-mo-cray_gnu/atlas/src/tests/trans/fctest_trans_main.F90:22
#3  0x403f77 in main
	at /home/d03/frwd/cylc-run/UpdateAtlas/share/build-mo-cray_gnu/atlas/src/tests/trans/fctest_trans_main.F90:3

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 21571 RUNNING AT nid00441
=   EXIT CODE: 139
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions

Accompanying data

No response

Organisation

Met Office

Bugs in cubed sphere implementation.

Found a couple of nasty bugs that slipped through the tests.

The first is that the CubedSphereBilinear class builds the interpolation matrix for every single target point!

The second is that the CubedSphereProjectionBase returns a tiles object by value instead of by const reference. @MarekWlasak, can you confirm that this behaviour is not intentional?

Associated with #104

`Field.levels()` is expensive

Is your feature request related to a problem? Please describe.

It has been noticed that looping through fields using field.levels(), e.g.:

for (atlas::idx_t horizontal = 0; horizontal < someFieldView.shape(0); ++horizontal) {
  for (atlas::idx_t vertical = 0; vertical < someField.levels(); ++vertical) {
    // do something...
  }
}

is significantly (~50-100x !) slower than first retrieving the levels from the field:

const atlas::idx_t fieldLevels = someField.levels();
for (atlas::idx_t horizontal = 0; horizontal < someFieldView.shape(0); ++horizontal) {
  for (atlas::idx_t vertical = 0; vertical < fieldLevels; ++vertical) {
    // ...
  }
}

Alternatively, using someFieldView.shape(1) as a proxy for the number of levels is also significantly faster.

Looking at the Field.levels() function, it seems that the number of levels is retrieved from a Metadata object, which I imagine is expensive.
Is there a reason it is done this way?

Describe the solution you'd like

If there's not a strong reason for having levels() pull a value from a Metadata object, could this be a member variable in Field?

Describe alternatives you've considered

If it's not possible to have it as a member variable, this should be documented so that other users of Atlas are aware of the potentially unwanted behaviour.

Additional context

No response

Organisation

Met Office

Adding robust vector-field interpolation functionality (current PR and further work).

Atlas PR #163 adds functionality to accurately interpolate vector fields defined in lon-lat space on the unit sphere. Here, we've added an interpolation::method::SphericalVector class. The Doxygen is given below:

  /// @brief   Interpolation post-processor for vector field interpolation
  ///
  /// @details Takes a base interpolation config keyed to "scheme" and creates
  ///          a set of complex interpolation weights which rotate source vector
  ///          elements into the target elements' individual frames of reference.
  ///          The method works by creating a great-circle arc between the source
  ///          and target locations; the amount of rotation is determined by the
  ///          difference in the great-circle course (relative to north) at
  ///          the source and target locations.
  ///          Both source and target fields require the "type" metadata to be
  ///          set to "vector" for this method to be invoked. Otherwise, the
  ///          base scalar interpolation method is invoked.
  ///          Note: This method only works with matrix-based interpolation
  ///          schemes.

Currently, the code additions test the following:

  • Interpolation from a cubedsphere mesh to a gaussian mesh
  • Interpolation from a gaussian mesh to a cubedsphere mesh

Suggested further tests and functionality:

Code coverage

  • Tests needed for vector interpolation on a horizontal field
  • Tests needed for 3D vector fields with levels > 1
  • Tests needed for "meaningful" 3-vector fields

Functionality

  • Implementation of adjoint methods (and relevant tests)

Scientific validation

  • Comparison with vector interpolation methods used with the StructuredColumns functionspace.
  • Low-to-high resolution comparisons.

@wdeconinck @pmaciel do you agree with the above points? Please feel free to add/delete/change.

Are there any of the above that need to go into PR #163? I would personally prefer follow-up PRs for the three groups, but I'm completely open to alternatives.

@mo-lormi and myself are well positioned to address the code coverage and adjoint work.

Support for ORCA (NEMO) grid

The ORCA grid is a tripolar grid.
The grid has structured i,j connectivity.
The point coordinates themselves are, however, not computable; they are given by an external data file.

Temporary addition of GNU 7.3 to GHA

Compiler support

As previously discussed, the Met Office has a temporary requirement where we have to make sure that Atlas builds and runs using GNU 7.3 compilers.

GNU 7.3 supports all of the C++17 language additions and most of the standard library additions, so the impact on Atlas should be minimal. See the list of supported features.

We currently believe we should be able to remove this requirement over the next six to twelve months.

Regional matching mesh partitioning

Is your feature request related to a problem? Please describe.

The matching mesh partitioning in all types but the cubed sphere requires grid.domain().global(). This is enforced by the ASSERT in



but not in https://github.com/ecmwf/atlas/blob/develop/src/atlas/grid/detail/partitioner/MatchingMeshPartitionerCubedSphere.cc
I don't understand why this is only allowed for the cubed sphere.

This prevents, for example, a regional domain from being regridded (i.e. having its resolution changed). The Lambert projections, for example, use SphericalPolygon for mesh matching, and most Lambert-projected model grids are regional, as far as I know.

Describe the solution you'd like

We could allow grid.domain().regional() in matching mesh for structured grids while checking that the coordinates of the four corners of the two domains match; this would ensure that the problem is well posed. But I am sure there are other considerations that I am not aware of.

Describe alternatives you've considered

No response

Additional context

No response

Organisation

JCSDA

Stopping the interpolation of missing values during finite-element interpolation

I believe I have set the metadata on my fields to reflect the missing values in my data (e.g. field.metadata().set("missing_value") ). When I interpolate to a point on the edge of a missing-data region, I find that the missing values are weighted in the same way as non-missing values. This is a problem, as it mixes the missing_value into my data. Is it possible to stop this in atlas at the moment?

If not, I propose the conservative solution of setting the interpolated value to a missing value if any of the points used in the interpolation are missing data.

StructuredColumns `FixupHaloForVectors` method interferes with `SphericalVector` interpolation

What happened?

StructuredColumns contains a struct template, FixupHaloForVectors, which is applied to fields that have type set to vector in their metadata. This multiplies field values beyond the poles by -1 directly after a halo exchange.

The SphericalVector interpolation scheme also checks for type == vector before applying the vector interpolation method. However, the factor of -1 applied by FixupHaloForVectors corrupts the interpolation at the poles (see below).
[image: fixup_halo_exchange]

The work-around, currently used in the atlas_test_interpolation_spherical_vector ctest, is to make sure that the dirty value in the field metadata is set to false. This produces a clean interpolation at the poles (see below), but it's clearly not useful in the general case, where halo exchanges are required.
[image: fixup_no_halo_exchange]

What are the steps to reproduce the bug?

The bug can be reproduced by commenting out this line from the interpolation test.

Version

develop

Platform (OS and architecture)

All builds

Relevant log output

No response

Accompanying data

No response

Organisation

Met Office

atlas_fctest_grids failure

atlas_fctest_grids fails as follows:

/spice/scratch/frwd/cylc-run/AtlasFailure/share/lfric-bundle/atlas/src/tests/grid/fctest_grids.F90:64: warning: FCTEST_CHECK_EQUAL( spec%json(), '{"domain":{"type":"global"},"name":"O32","projection":{"type":"lonlat"}}' )
--> [{"name":"O32","domain":{"type":"global"},"projection":{"type":"lonlat"}}!=
{"domain":{"type":"global"},"name":"O32","projection":{"type":"lonlat"}}]

Atlas version 0.23.0. It looks like the string representation of a dictionary-type structure is being generated with the elements in a different order, causing the string comparison to fail even though eyeballing the elements suggests they are in fact the same.

Compile failures with old Intel compiler

I am getting compile failures with an older version of the Intel compiler (18.0.5.274):

compilation aborted for /home/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/atlas/interpolation/method/structured/QuasiCubic2D.cc (code 2)
gmake[2]: *** [atlas/src/atlas/CMakeFiles/atlas.dir/interpolation/method/structured/QuasiCubic2D.cc.o] Error 2
compilation aborted for /home/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/atlas/interpolation/method/structured/Linear2D.cc (code 2)
gmake[2]: *** [atlas/src/atlas/CMakeFiles/atlas.dir/interpolation/method/structured/Linear2D.cc.o] Error 2
/home/d00/darth/opt/bb-stack/cray_intel-v24/include/boost/container/vector.hpp(1125): internal error: bad pointer
        boost::container::destroy_alloc_n
        ^

compilation aborted for /home/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/atlas/mesh/actions/BuildConvexHull3D.cc (code 4)
gmake[2]: *** [atlas/src/atlas/CMakeFiles/atlas.dir/mesh/actions/BuildConvexHull3D.cc.o] Error 4
which can be fixed by adjusting some Atlas CMake options, and

/home/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/atlas/interpolation/method/structured/StructuredInterpolation2D.tcc(521): error: identifier "west" is undefined
      auto [west, east] = compute_src_west_east(source());
            ^
          detected during instantiation of "void atlas::interpolation::method::StructuredInterpolation2D::do_execute(const atlas::FieldSet &, atlas::FieldSet &, atlas::interpolation::Method::Metadata &) const [with Kernel=atlas::interpolation::method::QuasiCubicHorizontalKernel]" at line 453

which, as far as I can tell, requires a code change to fix.

Fortran bindings for size & spec

The Fortran size binding applies to objects of class atlas_Grid:

function atlas_Grid__size(this) result(npts)
  use, intrinsic :: iso_c_binding, only: c_long
  use atlas_grid_Grid_c_binding
  class(atlas_Grid), intent(in) :: this
  integer(c_long) :: npts
  npts = atlas__grid__Grid__size(this%CPTR_PGIBUG_A)
end function

But the C++ implementation accepts a Structured grid object :

idx_t atlas__grid__Structured__size( Structured* This ) {
    ATLAS_ASSERT( This != nullptr, "Cannot access uninitialised atlas_StructuredGrid" );
    return This->size();
}

Moreover, it would be desirable to have access to the spec method of Grid objects from Fortran.

field.rename does not update the fieldset index map

What happened?

The method field.rename(<newname>) does not update the index map of the fieldset, as seen in this code snippet:

void rename(const std::string& name);

Instead, it only updates the metadata, as seen here:

void rename(const std::string& name) { metadata().set("name", name); }

This means that certain methods, such as fieldset.has(<newname>), become rather useless for renamed fields, since they check the index map and not the metadata.

bool FieldSetImpl::has(const std::string& name) const {
    return index_.count(name);
}

The method fieldset.field_names() returns the correct changed names as I think it uses the metadata.

What are the steps to reproduce the bug?

  1. Create a fieldset.
  2. Rename one of its fields: fieldset.field(old_name).rename(new_name)
  3. Try using fieldset.field_names(): this returns [new_name]
  4. Try using fieldset.has(new_name)
  5. Try using fieldset.has(old_name)

Version

ecmwf-atlas-0.32.1

Platform (OS and architecture)

MacOS 12.6.2

Relevant log output

No response

Accompanying data

No response

Organisation

UCAR/JCSDA

Add option to reduce fraction of grid points checked when finite element interpolation fails

Description

The finite-element interpolation type will attempt to check nearest neighbour cells when a target location is not within the cell suggested by the KDTree search (see FiniteElement.cc#L263).

I think this can be useful, as the KDTree is only based upon the cell centres and may not be perfectly accurate for non-rectangular cells. However, the default behaviour is for this check to search 20% of the grid. This massively impacts performance on large grids where the grid is not global and the target point is outside the grid boundary.

I would like to add an option to set the fraction of the grid to check, with a minimum of 0 such that only the nearest neighbour cells are checked.

Acceptance Criteria (Definition of Done)

It is possible to set the maxFractionElemsToTry through the configuration of an atlas interpolator, and this behaviour is covered by a test.
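The capped-search idea can be sketched independently of Atlas (stdlib-only; max_elems_to_try is a hypothetical helper, not the Atlas API): given a nearest-first ordering of candidate cells, try at most a configurable fraction of the grid, but always at least the nearest cell.

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical helper illustrating the proposed option (not Atlas code):
// cap the number of candidate cells searched after a failed KDTree hit.
// With max_fraction = 0 only the nearest-neighbour cells are checked;
// the current default of 0.2 would scan 20% of the grid.
std::size_t max_elems_to_try(std::size_t nelems, double max_fraction) {
    return std::max<std::size_t>(
        1, static_cast<std::size_t>(max_fraction * static_cast<double>(nelems)));
}
```

On a one-million-cell regional grid this reduces the worst-case fallback search from 200,000 cells to the immediate neighbours, which is where the reported performance impact comes from.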

Cannot find ectrans header

What happened?

Trying to build atlas against ectrans (head of develop) gives this compile failure:

/home/h01/frwd/cylc-run/mi-be984/work/1/git_clone_atlas/atlas/src/atlas/library/Library.cc:63:10: fatal error: ectrans/transi.h: No such file or directory
63 | #include "ectrans/transi.h"
| ^~~~~~~~~~~~~~~~~~

What are the steps to reproduce the bug?

Compile atlas with -DATLAS_ENABLE_ECTRANS=ON. Ectrans has been installed and ecbuild claims to have found it.

Version

head of develop

Platform (OS and architecture)

Linux.

Relevant log output

No response

Accompanying data

No response

Organisation

Meto

Allow atlas-run to use something other than srun

Currently, when I run the Atlas ctests, the tools/atlas-run script uses srun when it is available. However, I would like to use mpiexec instead. I cannot easily remove srun from the path, as it is in /usr/bin. Is it possible to enable the use of mpiexec in this situation?

Adjoint of a halo-exchange

Hi. I think this is something I could tackle, initially for structured data. Is that OK, @wdeconinck @ytremolet? Please tell me either way; I won't be offended if you say no. If the answer is yes, then tell me how long I have got.

fatal error: atlas/library.h: No such file or directory and failed cmake run.

sudo ./build_hello_world.sh
Building project hello_world

  • mkdir -p /home/ian/ecmwfatlas/atlas/doc/example-projects/build_hello_world_2021-09-12_05-06-37
  • cd /home/ian/ecmwfatlas/atlas/doc/example-projects/build_hello_world_2021-09-12_05-06-37
  • cmake -DECBUILD_2_COMPAT=OFF /home/ian/ecmwfatlas/atlas/doc/example-projects/project_hello_world
    -- The CXX compiler identification is GNU 9.3.0
    -- Check for working CXX compiler: /usr/bin/c++
    -- Check for working CXX compiler: /usr/bin/c++ -- works
    -- Detecting CXX compiler ABI info
    -- Detecting CXX compiler ABI info - done
    -- Detecting CXX compile features
    -- Detecting CXX compile features - done
    -- atlas_VERSION 0.19.0
    -- atlas_VERSION_STR
    -- atlas_DIR /usr/lib/x86_64-linux-gnu/cmake/atlas_ecmwf
    -- atlas_HAVE_OMP 1
    -- atlas_HAVE_OMP_CXX 1
    -- atlas_HAVE_OMP_Fortran
    -- atlas_HAVE_TRANS
    -- atlas_HAVE_MPI
    -- ATLAS_LIBRARIES
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/ian/ecmwfatlas/atlas/doc/example-projects/build_hello_world_2021-09-12_05-06-37
  • make
    Scanning dependencies of target hello_world
    [ 50%] Building CXX object CMakeFiles/hello_world.dir/hello_world.cc.o
    /home/ian/ecmwfatlas/atlas/doc/example-projects/project_hello_world/hello_world.cc:1:10: fatal error: atlas/library.h: No such file or directory
    1 | #include "atlas/library.h"
    | ^~~~~~~~~~~~~~~~~
    compilation terminated.
    make[2]: *** [CMakeFiles/hello_world.dir/build.make:63: CMakeFiles/hello_world.dir/hello_world.cc.o] Error 1
    make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/hello_world.dir/all] Error 2
    make: *** [Makefile:95: all] Error 2
  • cleanup
  • EXIT_CODE=2
  • rm -Rf /home/ian/ecmwfatlas/atlas/doc/example-projects/build_hello_world_2021-09-12_05-06-37
  • exit 2
    ian@ian-HP-Stream-Laptop-11-y0XX:~/ecmwfatlas/atlas/doc/example-projects$ ls
    build_hello_world_fortran.sh build_hello_world.sh project_hello_world project_hello_world_fortran README source-me.sh
    ian@ian-HP-Stream-Laptop-11-y0XX:~/ecmwfatlas/atlas/doc/example-projects$

Also, running cmake directly fails to find ecbuild:
ian@ian-HP-Stream-Laptop-11-y0XX:~/ecmwfatlas$ cmake atlas
CMake Error at CMakeLists.txt:14 (find_package):
  Could not find a package configuration file provided by "ecbuild"
  (requested version 3.4) with any of the following names:

    ecbuildConfig.cmake
    ecbuild-config.cmake

  Add the installation prefix of "ecbuild" to CMAKE_PREFIX_PATH or set
  "ecbuild_DIR" to a directory containing one of the above files.  If
  "ecbuild" provides a separate development package or SDK, be sure it has
  been installed.

-- Configuring incomplete, errors occurred!
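The error message itself points at the fix: tell CMake where ecbuild is installed. A hedged configuration example (the install prefix below is a placeholder, not a real location):

```shell
# Point CMake at the ecbuild installation prefix; replace the placeholder
# path with the directory that actually contains ecbuildConfig.cmake.
cmake -DCMAKE_PREFIX_PATH=/path/to/ecbuild/install atlas
```

Alternatively, set ecbuild_DIR to the directory containing ecbuildConfig.cmake, as the message suggests.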
