
Contributors

dombrno, egull, fsohn, galexv, shinaoka


ct-hyb's Issues

'model' parameters are not stored in the HDF5 output

Most 'model' parameters are not stored correctly in the 'model' section of the HDF5 file. The values of

/parameters/definitions/model.beta Dataset {SCALAR}
/parameters/definitions/model.command_line_mode Dataset {SCALAR}
/parameters/definitions/model.coulomb_tensor_input_file Dataset {SCALAR}
/parameters/definitions/model.delta_input_file Dataset {SCALAR}
/parameters/definitions/model.hopping_matrix_input_file Dataset {SCALAR}
/parameters/definitions/model.n_tau_hyb Dataset {SCALAR}
/parameters/definitions/model.sites Dataset {SCALAR}
/parameters/definitions/model.spins Dataset {SCALAR}

were either zero or empty, depending on the datatype. I'm not sure about

/parameters/definitions/model.basis_input_file Dataset {SCALAR}

since I don't use it.

Compilation errors on a cray system

I got some feedback from fsohn.
The build of CT-HYB fails on a Cray system.

He built ALPSCore with the Intel compiler and Boost 1.61.
The problem is that the Intel compiler complains when he tries to build the CT-HYB solver (especially the tests).
A typical error is shown below.
It seems that this error is related to this report (a GCC extension issue):
google/googletest#100

I am wondering how ALPSCore avoids this issue.

[ 63%] Building CXX object CMakeFiles/unittest_solver.dir/test/unittest_solver.cpp.o
/opt/cray/craype/2.4.2/bin/CC -I/home/h/nipfsohn/programs/CTHYB/build -I/home/h/nipfsohn/programs/CTHYB/CT-HYB -I/home/h/nipfsohn/programs/source/eigen/eigen-eigen-07105f7124f9 -I/home/h/nipfsohn/programs/CTHYB/CT-HYB/include -I/home/h/nipfsohn/programs/CTHYB/CT-HYB/test -isystem /home/h/nipfsohn/programs/install/include -isystem /home/h/nipfsohn/programs/ALPSCore/install/include -isystem /opt/cray/hdf5-parallel/1.8.16/INTEL/15.0/include -DBOOST_DISABLE_ASSERTS -DNDEBUG -o CMakeFiles/unittest_solver.dir/test/unittest_solver.cpp.o -c /home/h/nipfsohn/programs/CTHYB/CT-HYB/test/unittest_solver.cpp
In file included from /home/h/nipfsohn/programs/CTHYB/CT-HYB/test/gtest.h(1704),
from /home/h/nipfsohn/programs/CTHYB/CT-HYB/test/unittest_solver.hpp(8),
from /home/h/nipfsohn/programs/CTHYB/CT-HYB/test/unittest_solver.cpp(1):
/usr/include/c++/4.3/tr1/tuple(74): error: expected an identifier
template<int _Idx, typename... _Elements>
^
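Looking at the linked googletest report, one untested workaround might be to make gtest fall back to its own tuple implementation instead of the system <tr1/tuple> header. GTEST_USE_OWN_TR1_TUPLE is a real googletest macro, but I have not verified that it fixes this particular build:

```sh
# untested: force googletest's bundled tuple instead of the system <tr1/tuple>
cmake /path/to/CT-HYB -DCMAKE_CXX_FLAGS="-DGTEST_USE_OWN_TR1_TUPLE=1"
```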

Problem with std::string parameter input

Good evening, I am trying to run a calculation using the latest master-branch versions of ALPSCore and ALPSCore/CT-HYB, and I get an error that seems to be related to the parameters type. An excerpt from my input.ini file is as follows:

on=0

[measurement.two_time_G2]
on=0
n_legendre=50

[measurement.equal_time_G2]
on=0

[measurement.nn_corr]
on=1
n_tau=3
n_def=10
def="nn_def.txt" 

When executing hybmat, I get the following error:

We cannot open nn_def.txt (type: std::string) (name='measurement.nn_corr.def')!

thrown at line 423 of impurity_init.ipp. The cause of the error is that the value of par[fname_key] is actually the string "nn_def.txt (type: std::string) (name='measurement.nn_corr.def')" instead of the string "nn_def.txt". I guess this is coming from ALPSCore, but I am not experienced enough with the inner workings of the parameters type to identify the root cause.

Thanks!


Unsure about update information in HDF5 output

Hi,
recently I had some trouble understanding the information about the updates in the output HDF5 file, e.g. the number of attempts and acceptances of an update. For example, in

/simulation/results/1-pair_insertion_remover_accepted_scalar Group
/simulation/results/1-pair_insertion_remover_accepted_scalar/count Dataset {SCALAR}
/simulation/results/1-pair_insertion_remover_accepted_scalar/mean Group
/simulation/results/1-pair_insertion_remover_accepted_scalar/mean/error Dataset {SCALAR}
/simulation/results/1-pair_insertion_remover_accepted_scalar/mean/value Dataset {SCALAR}

I found an integer number in the 'count' dataset, which I expected to be the number of accepted updates of this type. However, the same number showed up in the 'count' dataset for attempted updates. So, I am not sure what it actually means because not every attempted update will be accepted. Furthermore, how exactly is the 'mean/value' (and the 'mean/error') obtained?

I assume that a certain number of updates of different types is performed, where the number of attempts of each update type is determined by an update rate, before the information is passed on to the ALPSCore library and the 'count' is increased by 1. The 'mean/value' would then be the mean of the 'count' accumulated data points. Is that correct?

Could you implement a counter for the number of attempted, valid and accepted updates for each type of update?
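My reading of the accumulation described above, as a toy loop (plain Python; all names are invented and this is only how I understand the bookkeeping, not the actual implementation):

```python
import random

random.seed(42)

n_measurements = 1000    # how often data is handed to the ALPSCore accumulator
attempts_per_meas = 10   # updates of this type tried between two measurements

attempted_acc = []  # accumulated "attempted" data, one entry per measurement
accepted_acc = []   # accumulated "accepted" data, one entry per measurement

for _ in range(n_measurements):
    # pretend each attempt is accepted with probability 0.3
    accepted = sum(1 for _ in range(attempts_per_meas) if random.random() < 0.3)
    attempted_acc.append(attempts_per_meas)
    accepted_acc.append(accepted)

# 'count' would be the number of accumulated entries, hence identical
# for the "attempted" and "accepted" observables:
count_attempted = len(attempted_acc)
count_accepted = len(accepted_acc)

# 'mean/value' would be the average over those entries:
mean_attempted = sum(attempted_acc) / count_attempted
mean_accepted = sum(accepted_acc) / count_accepted

# an acceptance ratio could then be recovered as:
acceptance_ratio = mean_accepted / mean_attempted
```

If this picture is right, the identical 'count' values are expected, and the ratio of the two 'mean/value' entries gives the acceptance rate.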

In-source build

In-source builds are now explicitly prohibited for CT-HYB (in CMakeLists.txt).
This gives rise to errors in the automatic tests with Jenkins.
What is the proper way to avoid these errors?
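On the Jenkins side, the usual fix is to configure in a separate build directory rather than relaxing the check; something along these lines in the job's shell step (paths are placeholders):

```sh
# configure and build out of source; $WORKSPACE is the Jenkins checkout
mkdir -p "$WORKSPACE/build"
cd "$WORKSPACE/build"
cmake "$WORKSPACE"   # plus the usual -D options for ALPSCore, Eigen, etc.
make
```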

Compilation failure with g++-4.8

The compilation fails with g++-4.8 and Boost 1.60, but seems to work fine with g++-5.4 and Boost 1.58.
Although it was triggered by the update of ALPSCore, it does not seem to be caused by the ALPSCore code itself. It might be due to the specific combination of Boost 1.60, g++ 4.8, and Eigen 3.3.4 ...

The error is:

In file included from /usr/local/eigen/eigen_3.3.4/Eigen/Core:390:0,
from /usr/local/eigen/eigen_3.3.4/Eigen/Dense:1,
from /home/galexv/Work/UMich/ALPSCore/ALPSCore_apps/CT-HYB/src/sliding_window/../model/model.hpp:18,
from /home/galexv/Work/UMich/ALPSCore/ALPSCore_apps/CT-HYB/src/sliding_window/sliding_window.hpp:8,
from /home/galexv/Work/UMich/ALPSCore/ALPSCore_apps/CT-HYB/src/sliding_window/sliding_window.cpp:1:
/usr/local/eigen/eigen_3.3.4/Eigen/src/Core/arch/CUDA/Half.h:481:8: error: specialization of ‘std::numeric_limits<Eigen::half>’ after instantiation
struct numeric_limits<Eigen::half> {
^
/usr/local/eigen/eigen_3.3.4/Eigen/src/Core/arch/CUDA/Half.h:481:8: error: redefinition of ‘struct std::numeric_limits<Eigen::half>’
In file included from /home/galexv/Work/UMich/ALPSCore/ALPSCore_apps/boost_1_60_0_b1/include/boost/limits.hpp:19:0,
from /home/galexv/Work/UMich/ALPSCore/ALPSCore_apps/boost_1_60_0_b1/include/boost/multi_array/index_range.hpp:18,
from /home/galexv/Work/UMich/ALPSCore/ALPSCore_apps/boost_1_60_0_b1/include/boost/multi_array/base.hpp:23,
from /home/galexv/Work/UMich/ALPSCore/ALPSCore_apps/boost_1_60_0_b1/include/boost/multi_array.hpp:21,
from /home/galexv/Work/UMich/ALPSCore/ALPSCore_apps/CT-HYB/src/sliding_window/sliding_window.hpp:4,
from /home/galexv/Work/UMich/ALPSCore/ALPSCore_apps/CT-HYB/src/sliding_window/sliding_window.cpp:1:
/usr/include/c++/4.8/limits:304:12: error: previous definition of ‘struct std::numeric_limits<Eigen::half>’
struct numeric_limits : public __numeric_limits_base
^

Discussion: replace 'N_spin' and 'N_sites' with 'N_flavors'?

Just an idea for a small discussion:
It might be confusing for users that there is an 'N_spin' as well as an 'N_sites' value when the code actually works on 'flavors'. As far as I understand, the only difference between setting e.g. 'N_spin=2, N_sites=3' and 'N_spin=1, N_sites=6' is the automatic setup of the global spin-swap update. If the user does not use the order '1up, 1dn, 2up, 2dn, 3up, 3dn' but e.g. '1up, 2up, 3up, 1dn, 2dn, 3dn', this update does not do what it was intended to do. So, in my opinion, it would be simpler to use a general flavor number 'N_flavors' and let the user specify the 'swap_vector'. What do you think?
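For concreteness, a hypothetical input along the proposed lines (the parameter names in the second block are only a suggestion, not existing options):

```ini
# today: the flavor order 1up, 1dn, 2up, 2dn, 3up, 3dn is implicitly assumed
[model]
sites=3
spins=2

# proposal (hypothetical names): one flavor count, explicit swap permutation
#[model]
#n_flavors=6
#swap_vector="1 0 3 2 5 4"
```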

[CI] Compilation fails on Cloudbees

...apparently because it needs more than 1GB of memory:

[  4%] Building CXX object CMakeFiles/alpscore_cthyb.dir/src/solver_real.cpp.o
/home/jenkins/openmpi-1.10.1/bin/mpic++   -DUSE_QUAD_PRECISION -Dalpscore_cthyb_EXPORTS -I/scratch/jenkins/workspace/cthyb-alpscore-devel/build -I/scratch/jenkins/workspace/cthyb-alpscore-devel -I/scratch/jenkins/workspace/cthyb-alpscore-devel/include -I/scratch/jenkins/workspace/cthyb-alpscore-devel/test -isystem /home/jenkins/boost_1_60_0_b1/include -isystem /home/jenkins/eigen-eigen-5a0156e40feb -isystem /scratch/jenkins/workspace/cthyb-alpscore-devel/ALPSCore/install/include -isystem /home/jenkins/hdf5-1.8.13/include  --param ggc-min-expand=20 --param ggc-min-heapsize=32768 -DALPS_GF_DEBUG -g -fPIC   -std=gnu++11 -o CMakeFiles/alpscore_cthyb.dir/src/solver_real.cpp.o -c /scratch/jenkins/workspace/cthyb-alpscore-devel/src/solver_real.cpp
g++: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <http://bugzilla.redhat.com/bugzilla> for instructions.
make[2]: *** [CMakeFiles/alpscore_cthyb.dir/src/solver_real.cpp.o] Error 4
make[2]: Leaving directory `/scratch/jenkins/workspace/cthyb-alpscore-devel/build'
make[1]: *** [CMakeFiles/alpscore_cthyb.dir/all] Error 2
make[1]: Leaving directory `/scratch/jenkins/workspace/cthyb-alpscore-devel/build'
make: *** [all] Error 2
-26.30user 2.44system 0:29.91elapsed 96%CPU (0avgtext+0avgdata 908644maxresident)k
360120inputs+36889outputs (41001major+283515minor)pagefaults 0swaps
Build step 'Execute shell' marked build as failure

@shinaoka : Shall we disable the Jenkins build for now?
(We have discussed this issue in person, so feel free to close as [wontfix].)

If compiler does not support MPI, finding MPI does not work

Here is the relevant part of the CMake error log:

-- Compiler does not support MPI. Trying to find MPI
-- Found MPI_CXX: /usr/lib/libmpi_cxx.so;/usr/lib/libmpi.so;/usr/lib/x86_64-linux-gnu/libdl.so;/usr/lib/x86_64-linux-gnu/libhwloc.so
-- MPI : Found compiler /usr/bin/mpicxx
-- MPI : Using /usr/bin/c++
CMake Error at cmake/EnableMPI.cmake:40 (target_include_directories):
Cannot specify include directories for target "ALPSCoreCTHYB" which is not
built by this project.
Call Stack (most recent call first):
CMakeLists.txt:46 (include)

CMake Error at cmake/EnableMPI.cmake:41 (target_link_libraries):
Cannot specify link libraries for target "ALPSCoreCTHYB" which is not built
by this project.
Call Stack (most recent call first):
CMakeLists.txt:46 (include)

-- Configuring incomplete, errors occurred!
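From the log, EnableMPI.cmake is included before the ALPSCoreCTHYB target exists, so a plausible fix is an ordering change in CMakeLists.txt; schematically (a guess from the error message, not the actual file):

```cmake
# define the target first ...
add_library(ALPSCoreCTHYB src/solver_real.cpp)

# ... and only then include the module that calls
# target_include_directories()/target_link_libraries() on that target
include(cmake/EnableMPI.cmake)
```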

Static linking for cray xc30

As requested by Florian,
I've introduced a CMake option for static linking in the "static-link" branch:
https://github.com/ALPSCore/CT-HYB/tree/static-link

One can switch to static linking by passing "-DCTHYB_BUILD_TYPE=static" to CMake.

I want to test the code, but unfortunately I do not have access to any Cray system.
Does anyone have access to one?
Florian, could you test the code?

BTW, how slow is the Cray C++ compiler?
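For reference, a full configure line might look like this (all paths are placeholders):

```sh
cmake /path/to/CT-HYB \
      -DCTHYB_BUILD_TYPE=static \
      -DALPSCore_DIR=/path/to/alpscore/install
make
```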

Available simulation results?

Good evening,
I am using ALPSCore/CT-HYB for the calculation of dynamic susceptibilities. This involves calculating G2, inverting the BSE, and performing the analytic continuation. The analytic continuation uses Maxent and thus needs some error information as input data. In order to feed a good estimate of the error to Maxent, I am considering a jackknife resampling procedure on the output of the simulation. Hence, my questions:
a) is there anything better/easier than this available somewhere within the ALPS project?
b) if not, is the detailed simulation data available in the h5 output file?

Thanks a lot for your help.
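Regarding the jackknife idea, a minimal delete-one jackknife for the standard error of a mean looks like this (plain Python sketch, the function name is mine; for the plain mean it reduces to the usual standard error, which is a quick sanity check):

```python
import math

def jackknife_error(samples):
    """Delete-one jackknife estimate of the standard error of the mean."""
    n = len(samples)
    total = sum(samples)
    # leave-one-out means: mean of the sample with one entry removed
    loo_means = [(total - x) / (n - 1) for x in samples]
    jk_mean = sum(loo_means) / n
    # jackknife variance formula, with its (n - 1)/n prefactor
    var = (n - 1) / n * sum((m - jk_mean) ** 2 for m in loo_means)
    return math.sqrt(var)

data = [1.0, 2.0, 3.0, 4.0, 5.0]
err = jackknife_error(data)   # equals sqrt(0.5) ~ 0.7071 for this data
```

The same function applied to a derived quantity (e.g. a BSE-inverted susceptibility computed on each leave-one-out subsample) gives the error estimate Maxent needs, which is the point of the resampling.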

Tutorials fail: Segmentation fault

I installed ALPSCore v2.1.0 with Boost 1.58.0 and the newest Eigen from GitHub, make test passes all tests.

Now I cloned CT-HYB and compiled it with g++ 5.4.0, which I was only able to do by adding set(CMAKE_CXX_STANDARD 11) to the top of CT-HYB/CMakeLists.txt. Again, make test passes both tests, although this first failed with a segfault but somehow worked after checking out v1.0.1 and then v1.0.2 again. However, I am not overly confident with tags in git, so I may have done something weird there. Right now I get

$ git status
HEAD detached at v1.0.2
nothing to commit, working directory clean

Running the code with any one of the tutorial inputs fails immediately, printing the following error message:

$ /home/mertz/CODES/alpscore/CT-HYB/install/bin/hybmat input.ini
[grundtal:14680] *** Process received signal ***
[grundtal:14680] Signal: Segmentation fault (11)
[grundtal:14680] Signal code: Address not mapped (1)
[grundtal:14680] Failing at address: 0x44000098
[grundtal:14680] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x354b0)[0x149a54cbf4b0]
[grundtal:14680] [ 1] /usr/lib/libmpi.so.12(MPI_Comm_rank+0x3e)[0x149a556422de]
[grundtal:14680] [ 2] /home/mertz/CODES/alpscore/CT-HYB/install/lib/libalpscore_cthyb.so(_ZN4alps5cthyb12MatrixSolverISt7complexIdEE5solveERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE+0xb9)[0x149a56c7f569]
[grundtal:14680] [ 3] /home/mertz/CODES/alpscore/CT-HYB/install/bin/hybmat(main+0x269)[0x411969]
[grundtal:14680] [ 4] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x149a54caa830]
[grundtal:14680] [ 5] /home/mertz/CODES/alpscore/CT-HYB/install/bin/hybmat(_start+0x29)[0x412439]
[grundtal:14680] *** End of error message ***

Does anyone know what I have done wrong?

Worm sampling

Good evening,
I see in the code that some worm-sampling capability seems to be implemented. I would be interested in testing it out for a two-particle GF calculation in the non-degenerate 2-band Hubbard model. I have been able to compute some two-particle GFs with the "standard" sampling, cutting hybridization lines, but the regime I am targeting now has a very low perturbation order and thus requires worm sampling. Unfortunately, I could not find any information about how this feature can be tested/used. What would you recommend?
Thanks a lot.

absence of __complex__ attribute in h5 file

Good morning,
I use ALPSCore/CT-HYB for computing G1 and G2, and then post-process the output data with my own C++ code, which uses the same ALPSCore libraries. In my code, I read and write data from and to HDF5 using the same process as in ALPSCore/CT-HYB. For example, in this code snippet, I save a complex boost multiarray to HDF5:

https://github.com/dombrno/SC_Loop_ALPSCore/blob/eb14d7459b248b7a3d7f01a19c0c68fda22b59cc/src/greens_function.cpp#L207-L225

This seems to me the exact same process used to save, e.g., the G2_LEGENDRE data in ALPSCore/CT-HYB:

https://github.com/dombrno/CT-HYB-1/blob/adbb7b81f16e85f5d86748649af5fb6bb06cb39a/src/postprocess.hpp#L297-L357

Nevertheless, in the case of my code, the HDF5 dataset ends up being equipped with the attribute complex = 1, while in the case of ALPSCore/CT-HYB, this attribute is not created.

It is not a big problem, but it leads me to use two different procedures for reading data, depending on whether it was generated by ALPSCore/CT-HYB or by my own CT-QMC code (a 2-orbital version of CT-HYB-SEGMENT): if the data comes from ALPSCore/CT-HYB, I need to recombine the real and imaginary parts into a complex boost::multiarray, while complex data coming from my other code (which also uses the ALPSCore libraries) can be read directly, thanks to the attribute being present.

https://github.com/dombrno/SC_Loop_ALPSCore/blob/eb14d7459b248b7a3d7f01a19c0c68fda22b59cc/src/greens_function.cpp#L244-L269

I am using ALPSCore c6eddbb9326a82a970a5a73569bc61f82cc1b46f and ALPSCore/CT-HYB 26f6acf, i.e. relatively recent versions from the end of March.
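For what it's worth, the recombination step I currently do by hand on the reading side looks schematically like this (plain Python; in reality the data are multidimensional and read through the ALPSCore HDF5 layer, but the logic is just this):

```python
def recombine(pairs):
    """Turn [[re, im], ...] pairs, as read from a dataset written
    without the complex attribute, into actual complex numbers."""
    return [complex(re, im) for re, im in pairs]

raw = [[1.0, -2.0], [0.5, 0.25]]   # (real, imaginary) pairs from the file
z = recombine(raw)                 # [(1-2j), (0.5+0.25j)]
```

If the attribute were written, this step would be unnecessary and both codes could share one reading path.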

make test fails when built with g++-4.7.2

Here are error messages from Jenkins.
https://alpscore.ci.cloudbees.com/job/cthyb-alpscore/220/console

When CT-HYB is built with g++ (GCC) 4.7.2, one of the unit tests fails with a segfault.
The test, "unittest_fu", is exactly the same as the one at https://github.com/shinaoka/fastupdate.
To trace back the cause of this crash, I built the same test with g++ 4.7.4 on my Mac,
but the test passed successfully.

How can I locate the point of SegFault?
Can I get more information from Jenkins?
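One way to get a backtrace without interactive access is to enable core dumps in the Jenkins shell step and run gdb in batch mode on the result (assuming gdb is installed on the build node; the test binary path and core-file name pattern below are guesses and vary by system):

```sh
ulimit -c unlimited            # allow core dumps
./test/unittest_fu || true     # run the failing test, ignore its exit code
# print a backtrace from the core file, if one was produced
gdb -batch -ex bt ./test/unittest_fu core* 2>/dev/null
```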
