ecmwf / atlas
A library for numerical weather prediction and climate modelling
Home Page: https://sites.ecmwf.int/docs/atlas
License: Apache License 2.0
Hello @wdeconinck. My problem is the following: I create an ATLAS structure (e.g. a NodeColumns
object) in a C++ layer, and I would like to access it from a Fortran subroutine that is called by the C++ layer.
I have noticed that in Fortran, an atlas_functionspace_NodeColumns
object can be initialized with a C pointer type(c_ptr)
, which is probably the solution to my problem.
However, once I have generated the ATLAS structure in C++, I don't know what to pass to the Fortran interface, in order to access and modify the data from the Fortran routine. Do you know how to do that?
Thank you.
Building atlas and running the ctests gives this:
52: Test command: /home/h01/frwd/cylc-run/AtlasFailure/share/build-mo-spice_gnu_debug/atlas/src/tests/grid/atlas_fctest_unstructuredgrid
52: Working Directory: /home/h01/frwd/cylc-run/AtlasFailure/share/build-mo-spice_gnu_debug/atlas/src/tests/grid
52: Environment variables:
52: OMP_NUM_THREADS=1
52: Test timeout computed to be: 10000000
52: /spice/scratch/frwd/cylc-run/AtlasFailure/share/mo-bundle/atlas/src/tests/grid/fctest_unstructuredgrid.F90:228: warning: FCTEST_CHECK_EQUAL( grid%owners(), 2 )
52: --> [ 1 != 2 ]
52: STOP 1
Building atlas and running the ctests; I am also using the head of develop of eckit and fckit.
head of develop
Linux & Cray, gcc
Met Office
The Météo-France ARPEGE grid is a stretched/rotated Gaussian reduced grid. Its coordinates are not properly generated by Atlas.
To be done after we have static B working with trans and the cubed-sphere.
In order to know if going into MPI for interpolation is required or not.
The current ForEach method is particularly slow when processing a large number of small arrays. This is probably due to the overhead of using a config object to set the execution policy.
Issue addressed by PR 113.
What's the design rationale for having the halo on the function space constructor and not on the mesh constructor, since it changes the mesh? It's quite unexpected.
My pattern is:
atlas::StructuredGrid structuredGrid = atlas::Grid("O32");
atlas::MeshGenerator::Parameters generatorParams;
generatorParams.set("triangulate", true);
generatorParams.set("angle", -1.0);
atlas::StructuredMeshGenerator generator(generatorParams);
auto mesh = generator.generate(structuredGrid);
atlas::functionspace::NodeColumns fs_nodes_(mesh, atlas::option::levels(nb_levels)
/*| atlas::option::halo(1)*/); // <--
std::cout << mesh.nodes().size() << std::endl;
I would like to be able to have an atlas fieldset that contains a mix of types, both float fields and double fields. However, I hit an issue when it comes to interpolating fields that contain missing values.
At configuration time, I need to configure my atlas interpolator for my entire fieldset. The only way to configure the Missing adjuster for the interpolation weights is at this time, using a configuration that is passed into a builder here:
However, this configuration holds globally for all fields in the fieldset, meaning that they all must have a matching type. If this is not the case, an error is thrown when attempting to construct a field-view of an incompatible datatype:
For example, using missing-if-all-missing requires that all fields to be interpolated are doubles. Using missing-if-all-missing-real32 requires that they are floats.
Ideally, I think the missing-values object datatype would be configured based on the runtime atlas field type, rather than on the configuration string passed to the factory. This would have two benefits:
Met Office
A Jacobian calculation for each projection would make it possible to compute the base vectors of the projection, and to interpolate vector fields between different projections.
This feature is therefore highly desirable.
Some JEDI applications will want to use the MeshBuilder but also to call mesh.grid(). Currently the grid is not set.
Setting the grid requires a global list of point coordinates.
I propose this change: MeshBuilder takes an MPI comm as argument, does an allGather of the (owned) point lons/lats, and constructs an UnstructuredGrid from the global list of points.
But there are some alternative options:
I think the proposed change is a more straightforward design and easier to reason about. The alternative options put more burden on the user and are a more complicated design (build a global list but then need to find a subset), but they do avoid adding an MPI comm to the argument list.
@wdeconinck do you have a preferred design?
UCAR/JCSDA
This method returns the number of grid points held by each MPI task.
Creating a StructuredColumns functionspace with a halo on this kind of grid leads to a huge, unrealistic halo. The problem can be seen simply by looking at fs.sizeOwned().
It would be very desirable for structured grids to have methods:
Our application spends a lot of time applying the 'nonlinear' adjustment to the interpolation weights during interpolation. I believe this is a significant opportunity for performance improvements.
I am imagining we could speed this up by utilising a similar approach to that used by other systems. We could cache the weights including applying a land mask at every level. This has the downside of increasing the size of the weights in memory by a multiple of the number of levels. However, it has the upside that we could use standard sparse matrix multiplication techniques to apply the weights when interpolating, rather than re-adjusting the weights for every field and every level at each application in here:
I think this would be the equivalent of adding 4 extra fields to a fieldset in terms of memory cost, but lead to a O(n_levels*n_fields) speed up in the interpolation application for multiple field, multiple level fieldsets.
There are other potential improvements to the interpolation with missing values (such as reducing the number of copies). One suggestion in a comment in the code is to perform only copy on write. I think my suggestion should be considerably more performant than anything else I can think of, but I am open to suggestions by all means.
Met Office
The Quad2D class can sometimes fail to find the intersection for a point which lies on the corner or edge of an irregular quad. Draft PR #101 demonstrates the occurrence of this bug.
Following on from work by @MarekWlasak, functionality can be added to generate fields for a PointCloud functionspace when ridx and part fields are missing at construction.
This work builds upon #111 and #112 and is currently being addressed by #120.
From my reading of the code, the current capability allows xy space to have a varied delta_x in the x direction, but only a fixed delta_y in the y direction, when using the StructuredGrid data structure.
The Met Office regional UKV model is on a rotated grid which is separable in the x and y directions but has a varied delta_x in the x direction and a varied delta_y in the y direction. (It has a constant delta_x and delta_y of 1.5 km in an internal region and then stretches to 4 km in a border region.)
There are three possible implementations - @wdeconinck - it would be great to get your thoughts on this.
All involve creating an additional example grid.
Version A:
Version B:
Version C:
It would be desirable to have such a binding.
Citing @wdeconinck
This “ComputeNorth” class is used to detect the y-index (or latitude-index) of the latitude North of a given coordinate.
During the construction of this class, it sets up some small arrays to help. It is created for global grids where it is more complicated to deal with extra latitude-rows that are going North of the North Pole (and South of the South Pole).
For a regional grid this should be much simpler and easy to implement. I just did not get around to it, and nobody had a use case for it, until today.
Problem:
Tasks:
Easier diagnosis: create automatic detection of one worst example of a target cell covering to be easily visualised in python polygon viewer.
Fix problem by revision of polygon intersection algorithm
Problem:
Tasks:
Problem:
The list of eckit::linalg::Triplet then contains multiple triplets corresponding to the same (row, col) of the interpolation matrix. Before creating the interpolation matrix these are now merged into unique contributions, which is very inefficient.
Task:
Running the atlas ctests with head of develop compiled in conjunction with ectrans gives a ctest failure in atlas_fctest_trans.
This is compiled against eckit 1.24, fckit 0.10.1, fiat 1.1.1 and ectrans 1.2.0. The above was produced with gnu compiler.
Head of develop
Cray and Linux
182/197 Test #166: atlas_fctest_trans ........................................***Failed 11.49 sec
/research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:113: warning: FCTEST_CHECK_EQUAL( trans_spectral%nb_spectral_coefficients_global(), (truncation+1)*(truncation+2) )
--> [ 6 != 506 ]
/research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:113: warning: FCTEST_CHECK_EQUAL( trans_spectral%nb_spectral_coefficients_global(), (truncation+1)*(truncation+2) )
--> [ 6 != 506 ]
/research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:113: warning: FCTEST_CHECK_EQUAL( trans_spectral%nb_spectral_coefficients_global(), (truncation+1)*(truncation+2) )
--> [ 6 != 506 ]
/research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:113: warning: FCTEST_CHECK_EQUAL( trans_spectral%nb_spectral_coefficients_global(), (truncation+1)*(truncation+2) )
--> [ 6 != 506 ]
nodes_fs%owners() 1
nodes_fs%owners() 2
nodes_fs%owners() 3
spectral_fs%owners() 1
/research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:129: warning: FCTEST_CHECK_EQUAL( spectral_fs%nb_spectral_coefficients_global(), (truncation+1)*(truncation+2) )
--> [ 6 != 506 ]
/research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:129: warning: FCTEST_CHECK_EQUAL( spectral_fs%nb_spectral_coefficients_global(), (truncation+1)*(truncation+2) )
--> [ 6 != 506 ]
spectral_fs%owners() 2
/research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:129: warning: FCTEST_CHECK_EQUAL( spectral_fs%nb_spectral_coefficients_global(), (truncation+1)*(truncation+2) )
spectral_fs%owners() 3
--> [ 6 != 506 ]
shape = 4
/research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:129: warning: FCTEST_CHECK_EQUAL( spectral_fs%nb_spectral_coefficients_global(), (truncation+1)*(truncation+2) )
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
--> [ 6 != 506 ]
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
#0 0x2b262446390f in ???
#0 0x2b47d9b6090f in ???
#1 0x408ad0 in __fctest_atlas_trans_MOD_test_trans
at /research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:146
#2 0x403f77 in run_fctest_atlas_trans
at /home/d03/frwd/cylc-run/UpdateAtlas/share/build-mo-cray_gnu/atlas/src/tests/trans/fctest_trans_main.F90:22
#3 0x403f77 in main
at /home/d03/frwd/cylc-run/UpdateAtlas/share/build-mo-cray_gnu/atlas/src/tests/trans/fctest_trans_main.F90:3
#1 0x408ad0 in __fctest_atlas_trans_MOD_test_trans
at /research/scratch/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/tests/trans/fctest_trans.F90:146
#2 0x403f77 in run_fctest_atlas_trans
at /home/d03/frwd/cylc-run/UpdateAtlas/share/build-mo-cray_gnu/atlas/src/tests/trans/fctest_trans_main.F90:22
#3 0x403f77 in main
at /home/d03/frwd/cylc-run/UpdateAtlas/share/build-mo-cray_gnu/atlas/src/tests/trans/fctest_trans_main.F90:3
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 21571 RUNNING AT nid00441
= EXIT CODE: 139
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions
Met Office
Found a couple of nasty bugs that slipped through the tests.
First one is where the CubedSphereBilinear class builds the interpolation matrix for every single target point!
Second one involves the CubedSphereProjectionBase returning a tiles object by value instead of const reference. @MarekWlasak can you confirm that this behaviour is not intentional?
Associated with #104
It has been noticed that looping through fields using field.levels(), e.g.:
for (atlas::idx_t horizontal = 0; horizontal < someFieldView.shape(0); ++horizontal) {
  for (atlas::idx_t vertical = 0; vertical < someField.levels(); ++vertical) {
    // do something...
  }
}
is significantly (~50-100x!) slower than first retrieving the levels from the field:
const atlas::idx_t fieldLevels = someField.levels();
for (atlas::idx_t horizontal = 0; horizontal < someFieldView.shape(0); ++horizontal) {
  for (atlas::idx_t vertical = 0; vertical < fieldLevels; ++vertical) {
    // ...
  }
}
Alternatively, using someFieldView.shape(1) as a proxy for the number of levels is also significantly faster.
Looking at the Field.levels() function, it seems that the levels value is retrieved from a Metadata object, which I imagine is expensive.
Is there a reason it is done this way?
If there's not a strong reason for having levels() pull a value from a Metadata object, could this be a member variable in Field?
If it's not possible to have it as a member variable, this should be documented so that other users of Atlas are aware of the potentially unwanted behaviour.
Met Office
Atlas PR #163 adds functionality to accurately interpolate vector fields defined in lon-lat space on the unit-sphere. Here, we've added an interpolation::method::SphericalVector class. Doxygen given below:
/// @brief Interpolation post-processor for vector field interpolation
///
/// @details Takes a base interpolation config keyed to "scheme" and creates
/// a set of complex interpolation weights which rotate source vector
/// elements into the target elements' individual frame of reference.
/// The method works by creating a great-circle arc between the source
/// and target locations; the amount of rotation is determined by the
/// difference in the great-circle course (relative to north) at
/// the source and target locations.
/// Both source and target fields require the "type" metadata to be
/// set to "vector" for this method to be invoked. Otherwise, the
/// base scalar interpolation method is invoked.
/// Note: This method only works with matrix-based interpolation
/// schemes.
Currently, the code additions test the following.
Suggested further tests and functionality:
StructuredColumns functionspace.
@wdeconinck @pmaciel do you agree with the above points? Please feel free to add/delete/change.
Are there any of the above that need to go into PR #163? I would personally prefer follow-up PRs for the three groups, but I'm completely open to alternatives.
@mo-lormi and myself are well positioned to address the code coverage and adjoint work.
The Fortran grid module lacks bindings to create rotated/stretched Gaussian grids.
The ORCA grid is a tripolar grid.
The grid has a structured i, j connectivity.
The point coordinates themselves are however not computable but are given by an external data file.
As previously discussed, the Met Office has a temporary requirement where we have to make sure that Atlas builds and runs using GNU 7.3 compilers.
GNU 7.3 supports all of the C++17 language additions and most of the standard library additions, so the impact on Atlas should be minimal. Supported features.
We currently believe we should be able to remove this requirement over the next six to twelve months.
The matching mesh partitioner in all types but the cubed sphere requires grid.domain().global(). This is enforced by the ASSERT in
but not in https://github.com/ecmwf/atlas/blob/develop/src/atlas/grid/detail/partitioner/MatchingMeshPartitionerCubedSphere.cc
I don't understand why this is only allowed for the cubed sphere.
This prevents, for example, a regional domain from being regridded (i.e. changing its resolution). For example, the Lambert projections use SphericalPolygon for matching mesh. Most Lambert-projected model grids are regional, as far as I know.
We could allow grid.domain().regional() in matching mesh for structured grids while checking that the coordinates of the 4 corners of the two domains match; this would ensure that the problem is well posed. But I am sure there must be other considerations that I am not aware of.
JCSDA
I believe I have set the metadata on my fields to reflect the missing values in my data (e.g. field.metadata().set("missing_value")). When I interpolate to a point on the edge of a missing-data region, I find that the missing values are weighted in the same way as non-missing values. This is a problem, as it mixes the missing_value into my data. Is it possible to stop this in atlas at the moment?
If not, I propose the conservative solution of setting the interpolated value to a missing value if any of the points used in the interpolation are missing data.
StructuredColumns contains a struct template FixupHaloForVectors which is applied to fields which have type set to vector in their metadata. This multiplies field values beyond the poles by -1 directly after a halo exchange.
The SphericalVector interpolation scheme also checks for type == vector before applying the vector interpolation method. However, the factor of -1 applied by FixupHaloForVectors corrupts the interpolation at the poles (see below).
The work-around, currently used in the atlas_test_interpolation_spherical_vector ctest, is to make sure that the dirty value in the field metadata is set to false. This produces a clean interpolation at the poles (see below), but it's clearly not useful in the general case where halo exchanges are required.
The bug can be reproduced by commenting out this line from the interpolation test.
develop
All builds
Met Office
atlas_fctest_grids fails as follows:
/spice/scratch/frwd/cylc-run/AtlasFailure/share/lfric-bundle/atlas/src/tests/grid/fctest_grids.F90:64: warning: FCTEST_CHECK_EQUAL( spec%json(), '{"domain":{"type":"global"},"name":"O32","projection":{"type":"lonlat"}}' )
--> [{"name":"O32","domain":{"type":"global"},"projection":{"type":"lonlat"}}!=
{"domain":{"type":"global"},"name":"O32","projection":{"type":"lonlat"}}]
Atlas version 0.23.0. It looks like the string representation of a dictionary-type structure is being generated with the elements in a different order, leading to a string comparison failing, although eyeballing the elements suggests they are in fact the same.
I am getting compile failures with an older version of the Intel compiler (18.0.5.274):
compilation aborted for /home/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/atlas/interpolation/method/structured/QuasiCubic2D.cc (code 2)
gmake[2]: *** [atlas/src/atlas/CMakeFiles/atlas.dir/interpolation/method/structured/QuasiCubic2D.cc.o] Error 2
compilation aborted for /home/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/atlas/interpolation/method/structured/Linear2D.cc (code 2)
gmake[2]: *** [atlas/src/atlas/CMakeFiles/atlas.dir/interpolation/method/structured/Linear2D.cc.o] Error 2
/home/d00/darth/opt/bb-stack/cray_intel-v24/include/boost/container/vector.hpp(1125): internal error: bad pointer
boost::container::destroy_alloc_n
^
compilation aborted for /home/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/atlas/mesh/actions/BuildConvexHull3D.cc (code 4)
gmake[2]: *** [atlas/src/atlas/CMakeFiles/atlas.dir/mesh/actions/BuildConvexHull3D.cc.o] Error 4
which can be fixed by adjusting some atlas cmake options, and
/home/d03/frwd/cylc-run/UpdateAtlas/share/mo-bundle/atlas/src/atlas/interpolation/method/structured/StructuredInterpolation2D.tcc(521): error: identifier "west" is undefined
auto [west, east] = compute_src_west_east(source());
^
detected during instantiation of "void atlas::interpolation::method::StructuredInterpolation2D::do_execute(const atlas::FieldSet &, atlas::FieldSet &, atlas::interpolation::Method::Metadata &) const [with Kernel=atlas::interpolation::method::QuasiCubicHorizontalKernel]" at line 453
which as far as I can tell requires a code change to fix.
The Fortran size binding applies to objects of class atlas_Grid:
function atlas_Grid__size(this) result(npts)
use, intrinsic :: iso_c_binding, only: c_long
use atlas_grid_Grid_c_binding
class(atlas_Grid), intent(in) :: this
integer(c_long) :: npts
npts = atlas__grid__Grid__size(this%CPTR_PGIBUG_A)
end function
But the C++ implementation accepts a Structured grid object:
idx_t atlas__grid__Structured__size( Structured* This ) {
ATLAS_ASSERT( This != nullptr, "Cannot access uninitialised atlas_StructuredGrid" );
return This->size();
}
Moreover, it would be desirable to have access to the spec method of Grid objects from Fortran.
The plan is to add named MPI communicators into the configuration objects passed to most data structures, such as FunctionSpace.
The method field.rename(<newname>) does not update the index of the fieldset, as seen in this code snippet:
Line 117 in 0e00ad9
Instead, it only updates the metadata, as seen here:
atlas/src/atlas/field/detail/FieldImpl.h
Line 102 in 0e00ad9
This means that certain methods, such as fieldset.has(<newname>), become rather useless for renamed fieldsets since they check the index map and not the metadata.
atlas/src/atlas/field/FieldSet.cc
Lines 44 to 46 in 0e00ad9
fieldset.field_names() returns the correct changed names, as I think it uses the metadata.
create a fieldset
rename a field: fieldset.field(old_name).rename(new_name)
try using fieldset.field_names(): this returns [new_name]
try using fieldset.has(new_name)
try using fieldset.has(old_name)
ecmwf-atlas-0.32.1
MacOS 12.6.2
UCAR/JCSDA
The finite-element interpolation type will attempt to check nearest neighbour cells when a target location is not within the cell suggested by the KDTree search (see FiniteElement.cc#L263).
I think this can be useful, as the KDTree is only based upon the cell centres and may not be perfectly accurate for non-rectangular cells. However, the default behaviour is for this check to search 20% of the grid. This massively impacts performance on large grids where the grid is not global and the target point is outside the grid boundary.
I would like to add an option to set the fraction of the grid to check, with a minimum of 0 such that only the nearest neighbour cells are checked.
It is possible to set the maxFractionElemsToTry through the configuration of an atlas interpolator, and this behaviour is covered by a test.
Trying to build atlas against ectrans (head of develop) gives this compile failure:
/home/h01/frwd/cylc-run/mi-be984/work/1/git_clone_atlas/atlas/src/atlas/library/Library.cc:63:10: fatal error: ectrans/transi.h: No such file or directory
63 | #include "ectrans/transi.h"
| ^~~~~~~~~~~~~~~~~~
Compile atlas with -DATLAS_ENABLE_ECTRANS=ON. Ectrans has been installed and ecbuild claims to have found it.
head of develop
Linux.
Meto
No bindings are available for creating or handling a distribution from Fortran, except a trivial constructor.
Currently when I run the ATLAS ctests the script in tools/atlas-run uses srun when it is available. However I would like to use mpiexec instead. I cannot easily get rid of srun from the path as it is in /usr/bin. Is it possible to enable the use of mpiexec in this situation?
There is a constructor in datatype for this already, but the function spaces don't use it for creating fields. This issue will deal with this.
Hi. I think that this is something that I could tackle, initially for structured data. Is that OK @wdeconinck @ytremolet? Please tell me either way. I won't be offended if you say no. If the answer is yes, then tell me how long I have got.
Creating such a distribution requires several GB of memory per MPI task and takes a long time (minutes) to create, which is not acceptable for the application we work on.
We need to add some additional functionality to the cubed-sphere projection due to incompatibilities between the current projection and the interpolation framework in https://github.com/JCSDA-internal/oops.
Specifically, we need to add some functionality to convert between (lon, lat) and (alpha, beta) for any cubed sphere panel/tile t. Currently the projection forces a strict mapping from (lon, lat) to a specific t.
sudo ./build_hello_world.sh
Building project hello_world
also
ian@ian-HP-Stream-Laptop-11-y0XX:~/ecmwfatlas$ cmake atlas
CMake Error at CMakeLists.txt:14 (find_package):
Could not find a package configuration file provided by "ecbuild"
(requested version 3.4) with any of the following names:
ecbuildConfig.cmake
ecbuild-config.cmake
Add the installation prefix of "ecbuild" to CMAKE_PREFIX_PATH or set
"ecbuild_DIR" to a directory containing one of the above files. If
"ecbuild" provides a separate development package or SDK, be sure it has
been installed.
-- Configuring incomplete, errors occurred!