
hpddm's People

Contributors

jacobfaib, prj-, stefanozampini

hpddm's Issues

GCRODR with flexible preconditioning

I would like to set up GCRODR with flexible preconditioning.
According to the manual, you have to pass the option -ksp_hpddm_variant flexible. At the same time, the manual says that this option is superseded by KSPSetPCSide(KSP ksp, PCSide side). However, KSPSetPCSide only offers PC_LEFT, PC_RIGHT, and PC_SYMMETRIC.
When I pass the -ksp_hpddm_variant flexible option, PetscCall(KSPView(ksp,PETSC_VIEWER_STDOUT_WORLD)) gives me the following output:

KSP Object: 1 MPI process
type: hpddm
HPDDM type: gcrodr
precision: DOUBLE
deflation subspace attached? TRUE
deflation target: SM
maximum iterations=500, initial guess is zero
tolerances:  relative=1e-06, absolute=1e-50, divergence=10000.
right preconditioning
using UNPRECONDITIONED norm type for convergence test

Do I really have flexible preconditioning now?
It would be nice if flexible preconditioning were confirmed in this output.
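
For context, here is a minimal sketch of the setup being discussed (an illustration only, assuming a PETSc build with hpddm support; the matrix A and the vectors b and x come from the application):

// Minimal sketch, assuming a PETSc build with hpddm support. Run with e.g.:
//   -ksp_type hpddm -ksp_hpddm_type gcrodr -ksp_hpddm_variant flexible
#include <petscksp.h>

PetscErrorCode SolveWithGCRODR(Mat A, Vec b, Vec x)
{
  KSP ksp;
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetType(ksp, KSPHPDDM));               /* or -ksp_type hpddm on the command line */
  PetscCall(KSPSetFromOptions(ksp));                  /* picks up the -ksp_hpddm_* options */
  PetscCall(KSPSolve(ksp, b, x));
  PetscCall(KSPView(ksp, PETSC_VIEWER_STDOUT_WORLD)); /* inspect which variant is actually used */
  PetscCall(KSPDestroy(&ksp));
  return 0;
}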

Using iterative solver for subdomain through FreeFEM interface

It looks like the default solver for the subdomains is a direct solver; however, I want to change it to an iterative solver. I am using FreeFEM as an interface to hpddm. Is there a document or a simple example/tutorial explaining how to switch the subdomain solver to an iterative solver in the FreeFEM/hpddm environment?

Using HPDDM with PETSc

Hello,
in your README you mention that hpddm can be used via PETSc; however, there is apparently no further documentation available. Also, when I check the configuration options of PETSc, there is no mention of hpddm. Is it still possible? If so, could you please tell me how to link the two packages?
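
For what it is worth, a minimal sketch of what the coupling looks like once PETSc has been built with hpddm (this assumes a sufficiently recent PETSc release that ships an hpddm package; A, b, and x are provided by the application):

// Sketch only, assuming a PETSc build configured with hpddm support.
#include <petscksp.h>

PetscErrorCode SolveWithHPDDM(Mat A, Vec b, Vec x)
{
  KSP ksp;
  PC  pc;
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetType(ksp, KSPHPDDM)); /* hpddm's Krylov solvers (GMRES, GCRODR, ...) */
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCHPDDM));    /* hpddm's overlapping Schwarz preconditioner */
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, b, x));
  PetscCall(KSPDestroy(&ksp));
  return 0;
}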

Spurious values inside eigenvectors

This may sometimes happen when using penalized nonhomogeneous Dirichlet boundary conditions. As a result, two-level preconditioners converge badly.

HPDDM with PETSc/SLEPc on powerpc

I have recently been trying to install FreeFEM on the Marconi100 PowerPC system and have run into some trouble. Eventually we fixed the problem by forking the repository and changing hpddm/include/HPDDM.hpp. I will briefly describe the problem and explain our fix. Hopefully you can use this information to find a neater/more permanent fix.

--
Marconi100 system:

Model: IBM Power AC922 (Whiterspoon)
Racks: 55 total (49 compute)
Nodes: 980
Processors: 2x16 cores IBM POWER9 AC922 at 2.6(3.1) GHz
Accelerators: 4 x NVIDIA Volta V100 GPUs/node, Nvlink 2.0, 16GB
Cores: 32 cores/node, Hyperthreading x4
RAM: 256 GB/node (242 usable)
Peak Performance: about 32 Pflop/s, 32 TFlops per node
Internal Network: Mellanox IB EDR DragonFly++
Disk Space: 8PB raw GPFS storage

Compilers:
spectrum_mpi - 10.3.1
gnu - 8.4.0

--

We tried different routes for the install (hpddm via FreeFEM, hpddm via PETSc, hpddm via PETSc via FreeFEM), but most routes eventually led to the same error: libpetsc.so having undefined references to some LAPACK functions (error messages attached below). However, libpetsc.so was correctly linked to liblapack.so. The problem is that liblapack.so has an underscore appended to the function names (see the other picture).

We have traced the source of this discrepancy to line 70 of hpddm/include/HPDDM.hpp:

#if defined(__powerpc__) || defined(INTEL_MKL_VERSION)
# define HPDDM_F77(func) func
#else
# define HPDDM_F77(func) func ## _
#endif

We solved it by forking the repository and changing this to (https://github.com/DaanVanVugt/hpddm/blob/hpddm_underscore_powerpc_2/include/HPDDM.hpp)

#define HPDDM_F77(func) func ## _

and changing the repository and commit ID of hpddm in petsc/config/BuildSystem/config/packages/hpddm.py.

I hope you can take a look and create a more general fix that works for everyone. I think solving this should also fix the issue when installing directly from FreeFEM (with PETSc, SLEPc, and hpddm).
If you require any more information from me, just let me know!

[Attached: screenshots of the linker error messages and of the underscored symbol names in liblapack.so.]
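
One possible shape for a more general fix, sketched below (HPDDM_F77_NO_UNDERSCORE is a hypothetical configuration macro, not an existing hpddm switch), would be to let the build system state the Fortran name-mangling convention explicitly instead of inferring it from the target architecture:

/* Sketch only: HPDDM_F77_NO_UNDERSCORE is hypothetical and would have to be
 * defined by the build system (e.g. by PETSc's or FreeFEM's configure). */
#if defined(HPDDM_F77_NO_UNDERSCORE) || defined(INTEL_MKL_VERSION)
# define HPDDM_F77(func) func
#else
# define HPDDM_F77(func) func ## _
#endif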

Too much workspace allocated in GMRES

If the number of Krylov directions to orthogonalize against is greater than the maximum number of iterations, there is no need to allocate workspace for all of them; the basis can be capped at the iteration limit.
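
A minimal sketch of the idea (illustrative only, with made-up names rather than hpddm's actual internals): the Krylov basis never needs more directions than the iteration limit allows, so the workspace can be capped by min(restart, max_it).

#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch: cap the number of stored Krylov directions by the iteration limit
// so that no more workspace than necessary is allocated (n = problem size).
std::vector<double> allocateBasis(int n, int restart, int max_it) {
    const int directions = std::min(restart, max_it);
    return std::vector<double>(static_cast<std::size_t>(directions) * n);
}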

support for python3

Hi Pierre,
could you add support for Python 3?
Python 2 will be unavailable in future Debian/Ubuntu releases...

Best
C

PS: for FreeFem++, which version of hpddm should I use?

MinGW fails to compile

It is currently not possible to build the library with the latest version of MinGW (from 2013). This is due to two defects: one with lambda functions, the other with std::stoi/std::stof/std::stod (from C++11).
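
A sketch of a typical workaround for the second defect (not hpddm's actual fix): on toolchains whose standard library lacks std::stoi/std::stof/std::stod, fall back to the C conversion routines.

#include <cstdlib>
#include <string>

// Illustrative fallback only; error handling is omitted for brevity.
namespace compat {
    inline int    stoi(const std::string& s) { return static_cast<int>(std::strtol(s.c_str(), nullptr, 10)); }
    inline float  stof(const std::string& s) { return std::strtof(s.c_str(), nullptr); }
    inline double stod(const std::string& s) { return std::strtod(s.c_str(), nullptr); }
}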

OpenBLAS support?

I've updated hpddm in MacPorts to version 2.2.2 and discovered that an additional patch is required to support OpenBLAS.

Without the patch below, linking fails because the daxpby function has the wrong prefix.

Here is the patch I used. Anyway, I feel that it isn't the right one; the right one should introduce an HPDDM_MKL-like macro to detect and support OpenBLAS.

diff --git include/HPDDM_BLAS.hpp include/HPDDM_BLAS.hpp
index 4062a1f..2abbc98 100644
--- include/HPDDM_BLAS.hpp
+++ include/HPDDM_BLAS.hpp
@@ -100,7 +100,7 @@ HPDDM_GENERATE_EXTERN_BLAS_COMPLEX(k, std::complex<__fp16>, h, __fp16)
 #  endif
 #  if HPDDM_MKL || (defined(__APPLE__) && !defined(PETSC_HAVE_F2CBLASLAPACK))
 #   if !HPDDM_MKL
-#    define HPDDM_PREFIX_AXPBY(func) catlas_ ## func
+#    define HPDDM_PREFIX_AXPBY(func) cblas_ ## func
 #   else
 HPDDM_GENERATE_EXTERN_MKL_EXTENSIONS(c, std::complex<float>, s, float)
 HPDDM_GENERATE_EXTERN_MKL_EXTENSIONS(z, std::complex<double>, d, double)
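
A sketch of the vendor-based selection suggested above (HPDDM_OPENBLAS is a hypothetical macro that the build system would define; it is not something hpddm currently provides):

/* Illustrative only: pick the ?axpby prefix according to the BLAS vendor
 * instead of hard-coding catlas_ on Apple platforms. */
#if defined(HPDDM_OPENBLAS)
# define HPDDM_PREFIX_AXPBY(func) cblas_ ## func   /* OpenBLAS ships cblas_daxpby & co. */
#else
# define HPDDM_PREFIX_AXPBY(func) catlas_ ## func  /* Apple Accelerate provides catlas_daxpby */
#endif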

Hang on MVAPICH2

When running on MVAPICH2, I got a hang on MPI_Igather. (OpenMPI seemed unaffected.)
Using Wrapper<downscaled_type<K>>::mpi_type() instead of MPI_DATATYPE_NULL fixed it for me.

MPI_Igather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, rhs, mu * *DMatrix::_gatherCounts, Wrapper<downscaled_type<K>>::mpi_type(), 0, _gatherComm, rq);
MPI_Wait(rq, MPI_STATUS_IGNORE);
Wrapper<downscaled_type<K>>::template cycle<'T'>(_sizeSplit - (_offset || excluded), mu, rhs + (_offset || excluded ? mu * *DMatrix::_gatherCounts : 0), *DMatrix::_gatherCounts);
super::solve(rhs + (_offset || excluded ? *DMatrix::_gatherCounts : 0), mu);
Wrapper<downscaled_type<K>>::template cycle<'T'>(mu, _sizeSplit - (_offset || excluded), rhs + (_offset || excluded ? mu * *DMatrix::_gatherCounts : 0), *DMatrix::_gatherCounts);
MPI_Iscatter(rhs, mu * *DMatrix::_gatherCounts, Wrapper<downscaled_type<K>>::mpi_type(), MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, 0, _scatterComm, rq + 1);
}
else {
MPI_Igather(rhs, mu * _local, Wrapper<downscaled_type<K>>::mpi_type(), NULL, 0, MPI_DATATYPE_NULL, 0, _gatherComm, rq);

Wrong Dirichlet boundary conditions for substructuring methods

When imposing a BC by setting a row to 0 and its diagonal coefficient to 1, there is sometimes a problem when an unknown is also a Lagrange multiplier. A temporary workaround is to ensure that the original system remains symmetric (by also setting the appropriate column to 0).
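
A minimal sketch of that workaround on a dense matrix (illustrative only; hpddm and its callers of course use their own storage formats): impose u[i] = g while keeping the matrix symmetric by zeroing both the row and the column and moving the column's contribution to the right-hand side.

#include <vector>

// Symmetric elimination of a Dirichlet unknown i with prescribed value g,
// for a dense n-by-n matrix A stored row-major and a right-hand side b.
void imposeDirichletSymmetric(std::vector<double>& A, std::vector<double>& b,
                              int n, int i, double g) {
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        b[j] -= A[j * n + i] * g; // lift the column into the right-hand side
        A[i * n + j] = 0.0;       // zero the row
        A[j * n + i] = 0.0;       // zero the column, preserving symmetry
    }
    A[i * n + i] = 1.0;
    b[i] = g;
}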

Seems like an infinite loop in `schwarz_cpp` on macOS

I'm using hpddm-2.2.4 on macOS 12 via MacPorts, and an attempt to run the tests leads to an infinite loop.

The log:

:info:test /opt/local/bin/mpiexec-mpich-mp -np 1  /opt/local/var/macports/build/_Users_catap_src_macports-ports_math_hpddm/hpddm/work/hpddm-2.2.4/bin/schwarz_cpp -hpddm_verbosity -hpddm_dump_matrices=.trash/output.txt -hpddm_version
:info:test [[email protected]] Sending Ctrl-C to processes as requested
:info:test [[email protected]] Press Ctrl-C again to force abort
:info:test ===================================================================================
:info:test =   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
:info:test =   PID 66603 RUNNING AT Kirills-MBP.sa31-home.catap.net
:info:test =   EXIT CODE: 2
:info:test =   CLEANING UP REMAINING PROCESSES
:info:test =   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
:info:test ===================================================================================
:info:test YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Interrupt: 2 (signal 2)
:info:test This typically refers to a problem with your application.
:info:test Please see the FAQ page for debugging suggestions
:info:test make: *** [test_bin/schwarz_cpp] Error 2

I waited for about an hour before interrupting it.

The end of output.txt:

     9998      9997 -9.99999999999999857891452847979962825775146484e+01
     9998      9998 3.99999999999999943156581139191985130310058594e+02
     9998      9999 -9.99999999999999857891452847979962825775146484e+01
     9999      9899 -9.99999999999999857891452847979962825775146484e+01
     9999      9998 -9.99999999999999857891452847979962825775146484e+01
     9999      9999 3.99999999999999943156581139191985130310058594e+02
     9999     10000 -9.99999999999999857891452847979962825775146484e+01
    10000      9900 -9.99999999999999857891452847979962825775146484e+01
    10000      9999 -9.99999999999999857891452847979962825775146484e+01
    10000     10000 3.99999999999999943156581139191985130310058594e+02

BTW, it reaches this point in the blink of an eye, and after that it just keeps consuming CPU and nothing else happens.
