richard-evans / vampire
Atomistic simulator for magnetic materials
License: GNU General Public License v2.0
Getting this error when running the make command:
/hdr -I./src/qvoronoi -std=c++0x src/dipole/mpi.cpp
src/dipole/mpi.cpp: In function 'int dipole::internal::send_cells_demag_factor(std::vector&, std::vector&, int)':
src/dipole/mpi.cpp:562:20: error: invalid conversion from 'const void*' to 'void*' [-fpermissive]
MPI_Isend(&num_send_cells, 1, MPI_INT, 0, 120, MPI_COMM_WORLD, &requests.back());
^~~~~~~~~~~~~~~
In file included from ./hdr/vmpi.hpp:36:0,
from ./hdr/vio.hpp:40,
from src/dipole/mpi.cpp:24:
/usr/local/include/mpi.h:577:5: note: initializing argument 1 of 'int MPI_Isend(void*, int, MPI_Datatype, int, int, MPI_Comm, MPI_Request*)'
int MPI_Isend(void*, int, MPI_Datatype, int, int, MPI_Comm, MPI_Request *);
^~~~~~~~~
makefile:247: recipe for target 'obj/dipole/mpi_mpi.o' failed
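This error occurs because pre-MPI-3 headers declare the MPI_Isend buffer argument as plain void*, so the address of a const-qualified variable cannot be passed. Upgrading to an MPI-3 library (where the signature takes const void*) is the clean fix; otherwise the usual workaround is to cast away const on the send buffer, which MPI never writes to. A minimal sketch using a mock of the old signature (MPI_Isend_like and send_cells are illustrative names, not Vampire code):

```cpp
#include <cassert>
#include <cstring>

// Mock of the pre-MPI-3 signature: the buffer is 'void*', even though
// a send buffer is only ever read.
int MPI_Isend_like(void* buf, int /*count*/) {
   int v = 0;
   std::memcpy(&v, buf, sizeof(int)); // "send": just read the buffer
   return v;                          // echo back what was sent
}

int send_cells(const int& num_send_cells) {
   // Workaround for old MPI headers: cast away const on the send
   // buffer only. Safe because MPI_Isend does not modify it.
   return MPI_Isend_like(const_cast<int*>(&num_send_cells), 1);
}
```

In src/dipole/mpi.cpp the same pattern would mean wrapping the first argument of the failing MPI_Isend call in a const_cast.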
Hi,
We have been using Vampire for a while now, mainly for undergraduate projects. We managed to compile the serial, parallel, and CUDA versions.
This week we had to reinstall Linux on our GPU machine, and after that we got an odd error while installing the CUDA version. The serial and parallel versions compiled without issues. However, when compiling the CUDA version, an error appeared:
nvcc -DCOMP='"NVCC 23_0"' src/cuda/monte_carlo.cu -dc -o obj/cuda/monte_carlo.o -O3 -I./hdr -DCUDA -gencode arch=compute_35,code=sm_35 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_50,code=compute_50 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_61,code=compute_61 -gencode arch=compute_70,code=compute_70 -gencode arch=compute_75,code=compute_75 --use_fast_math -ftz=true -std=c++14 -Wno-deprecated-gpu-targets -DCUDA_DP -DCUDA_MATRIX=CSR
/usr/include/c++/11/bits/std_function.h:435:145: error: parameter packs not expanded with '...':
435 | function(_Functor&& __f)
| ^
/usr/include/c++/11/bits/std_function.h:435:145: note: '_ArgTypes'
/usr/include/c++/11/bits/std_function.h:530:146: error: parameter packs not expanded with '...':
530 | operator=(_Functor&& __f)
| ^
/usr/include/c++/11/bits/std_function.h:530:146: note: '_ArgTypes'
make: *** [src/cuda/makefile:58: obj/cuda/monte_carlo.o] Error 1
We have:
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
CUDA:
Driver Version: 510.85.02
CUDA Version: 11.6
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Fri_Dec_17_18:16:03_PST_2021
Cuda compilation tools, release 11.6, V11.6.55
Build cuda_11.6.r11.6/compiler.30794723_0
GCC:
gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Can anyone help us with this?
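This "parameter packs not expanded" failure inside /usr/include/c++/11/bits/std_function.h is a known incompatibility between the CUDA 11.x nvcc front end and the GCC 11 libstdc++ headers, not a problem in the Vampire sources. A common workaround, assuming an older host compiler such as g++-10 is installed, is to point nvcc at it with -ccbin, for example in the CUDA makefile:

```
# Hypothetical tweak to src/cuda/makefile: use an older host compiler.
# (NVCC_FLAGS is illustrative; adapt to the variable names actually used.)
NVCC = nvcc -ccbin g++-10 $(NVCC_FLAGS)
```

Alternatively, a newer CUDA toolkit release that supports GCC 11 cleanly should also resolve it.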
Dear Richard,
Unfortunately I was not able to attend the workshop.
Is it possible to put the slides for the second workshop on the site, as was done for the first one?
Hi Richard, I was thinking of tweaking VDC to allow the user to specify which snapshots to process. Is that OK?
I was wondering if the developers would be willing to port their application's installation setup to the Spack package manager.
It would be really helpful for setting up the software on clusters.
In version 6.0.0, the control statement 'sim:monte-carlo-algorithm' cannot be set in the input file.
The error is: "Error - Unknown control statement 'sim:monte-carlo-algorithm' on line 49 of input file
Fatal error: Aborting program. See log file for details."
I tried to compile the parallel version from source for both the master and develop branches.
I installed them on a school server, so the environment is loaded via modules, as shown below:
module load compiler/intel/15
module load intel-mkl/15
module load openmpi/1.10
The master branch has no problem with make parallel.
However, on the develop branch, make parallel gives:
/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/bits/basic_string.h:2610: note: std::string std::to_string(long long unsigned int)
/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/bits/basic_string.h:2616: note: std::string std::to_string(long double)
make: *** [obj/statistics/energy_mpi.o] Error 1
and then stops.
Can anyone fix this?
I would like further information on the third advanced VAMPIRE workshop, in order to simulate skyrmion thermal interactions.
The input file parser in vio.cpp contains over 200 if/else statements with accompanying strings, repeats large amounts of code, is inefficient for adding new parameters, and is generally a mess. A new framework is needed to make adding new input parameters much easier and more robust.
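One common replacement for a long if/else chain is a table-driven parser: each parameter registers a handler in a map keyed by its control-statement string, so adding a parameter is one registration line rather than a new branch. A minimal sketch (ParameterTable, make_table, and the sim:temperature handler are hypothetical names, not the actual vio.cpp interface):

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>

// Hypothetical registry: each control statement maps to a handler that
// parses its value, replacing a 200-branch if/else chain.
struct ParameterTable {
   std::unordered_map<std::string,
                      std::function<void(const std::string&)>> handlers;

   void add(const std::string& key,
            std::function<void(const std::string&)> fn) {
      handlers[key] = std::move(fn);
   }

   // Returns false for unknown keys so the caller can report the
   // offending line of the input file.
   bool parse(const std::string& key, const std::string& value) {
      auto it = handlers.find(key);
      if (it == handlers.end()) return false;
      it->second(value);
      return true;
   }
};

double temperature = 0.0; // stand-in for a real simulation variable

ParameterTable make_table() {
   ParameterTable t;
   t.add("sim:temperature",
         [](const std::string& v) { temperature = std::stod(v); });
   return t;
}
```

Each module could register its own parameters at start-up, which also removes the central file as a merge bottleneck.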
I tried to compile the opencl version of Vampire using the latest develop branch. However, the compile failed, the error messages are:
g++ -std=c++0x -DCOMP='"GNU C++ Compiler"' -c -o obj/gpu/initialize.o -O3 -mtune=native -funroll-all-loops -fexpensive-optimizations -funroll-loops -I./hdr -I./src/qvoronoi -std=c++0x -std=c++11 -DOPENCL src/gpu/initialize.cpp
src/gpu/initialize.cpp: In function 'void gpu::initialize_dipole()':
src/gpu/initialize.cpp:57:19: error: 'initialize_dipole' is not a member of 'vopencl'; did you mean 'gpu::initialize_dipole'?
57 | vopencl::initialize_dipole();
| ^~~~~~~~~~~~~~~~~
src/gpu/initialize.cpp:49:9: note: 'gpu::initialize_dipole' declared here
49 | void initialize_dipole(){
| ^~~~~~~~~~~~~~~~~
make: *** [makefile:199: obj/gpu/initialize.o] Error 1
I'm quite sure that the needed OpenCL headers are correctly installed; can you take a look at this?
Problem when reducing the number of atoms at the end of the cs::create() function when the system has grains
Dear developer,
I have a question about the exchange parameter J in Curie temperature simulation.
From the simple equation H = -J * Si * Sj, it looks like J is related to the spin moment. However, when I change the value of the atomic spin moment (which means S is different) and keep J fixed, the magnetization-vs-temperature curves are the same, which indicates the same Tc. This does not seem right.
Am I missing some tags? Or is the spin S fixed in the software?
Thanks in advance.
Lei
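For reference: in the normalized atomistic convention the VAMPIRE documentation describes, the Si are unit vectors and Jij carries the full interaction energy in joules, so the exchange term, and with it Tc, does not depend on the atomic spin moment at all; the moment only rescales the coupling to applied and thermal fields and the output magnetization in real units. A minimal sketch of that convention (the function name is illustrative):

```cpp
#include <cassert>

// Heisenberg exchange energy in the normalized convention: spins are
// unit vectors, J carries the full interaction energy in joules.
// Note the atomic spin moment does not appear in this term.
double exchange_energy(double J,
                       double s1x, double s1y, double s1z,
                       double s2x, double s2y, double s2z) {
   return -J * (s1x * s2x + s1y * s2y + s1z * s2z);
}
```

So to shift Tc you would change the Jij values in the .ucf or .mat file, not the atomic-spin-moment.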
Hello there,
I was running the program unit-cell-creator.cpp. For
const int num_materials=2;
the results are good, but when I change the value from 2 to 1, it crashes with a core dump:
terminate called after throwing an instance of 'std::out_of_range'
what(): vector::_M_range_check: __n (which is 1) >= this->size() (which is 1)
Aborted (core dumped)
Please help
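The exception says index 1 was requested from a vector of size 1, i.e. some lookup inside unit-cell-creator.cpp assumes a second material exists. A defensive sketch of the failure mode (safe_material is a hypothetical helper; the clamp only illustrates the mechanism, and the real fix is to make any second-material lookup conditional on num_materials):

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Hypothetical helper illustrating the crash: vector::at(1) throws
// std::out_of_range when only one material is defined. Clamping the
// index keeps the lookup in range; in real code the branch that asks
// for a second material should simply be skipped instead.
int safe_material(const std::vector<int>& materials, std::size_t idx) {
   if (materials.empty()) throw std::runtime_error("no materials defined");
   if (idx >= materials.size()) idx = materials.size() - 1; // clamp
   return materials[idx];
}
```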
Is it possible to use a cif file to generate a ucf file?
Writing a ucf file manually, it is very easy to make mistakes, and writing a ucf manually is very painful. The pymatgen package can parse cif files easily, so it should be possible to use a cif file to generate a ucf file.
https://github.com/xbunax/vampire_code.git
This is a demo I wrote in Python using the pymatgen package; it can use a cif file to generate a ucf file. I hope this feature can be taken into consideration, thanks.
Set the cu::grid_size variable at initialisation (according to the plan).
That's one thing; let's see what else comes up and register it here.
In the file initialise_variables.cpp, at line 564:
"if(sim::second_order_uniaxial_anisotropy==true)" should be "if(sixth_order_uniaxial_anisotropy==true)"
by the way, thanks for your open source work
Dear Richard,
Thanks a lot for your hard work on Vampire.
I'm wondering what the units of "output:material-mean-susceptibility" are.
I tried to perform a basic Curie-Weiss fit of the output, and the resulting moment (extracted from the Curie constant) was nowhere near the expected value of "atomic-spin-moment" from the .mat file.
I know that this question was discussed quite a lot in the Google groups, but I still have some difficulties with defining the exchange interaction in .ucf files.
Let's say I have a simple 3D AFM with exchange interactions known from inelastic neutron scattering experiments. These exchange interactions are usually published in meV per link.
Should I just convert meV -> J (by multiplying by 1.6021766*10^-22), or should some additional normalization by the number of interactions or the unit cell volume be performed?
I tried the direct meV -> J conversion, and the resulting Néel temperature was more than 10 times different from the expected one.
Thanks in advance!
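For the unit conversion itself, 1 meV = 1.602176634e-22 J exactly (from the SI electron charge), so the direct multiplication is correct and no volume normalization enters for per-link values. A sketch of the conversion; note that a factor this small cannot explain a 10x discrepancy, so the likely culprit is a convention mismatch, for example whether the published Hamiltonian counts each pair once or twice, or absorbs a factor of S or S(S+1), which is an assumption worth checking against the VAMPIRE manual and the original paper:

```cpp
#include <cassert>
#include <cmath>

// Direct unit conversion: 1 meV = 1.602176634e-22 J.
double mev_to_joule(double energy_mev) {
   return energy_mev * 1.602176634e-22;
}
```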
vdc --vtk copies the first spin "num_atoms" times, then starts adding the remaining spins.
The issue seems to come from "vdc::atoms_list".
Replacing it with "sliced_atoms_list" on lines 69 and 72 solves the issue.
- for(size_t i=0; i < vdc::atoms_list.size(); i++){
+ for(size_t i=0; i < vdc::sliced_atoms_list.size(); i++){
- unsigned int atom = vdc::atoms_list[i];
+ unsigned int atom = vdc::sliced_atoms_list[i];
Hello everyone,
Among the source code there is a file called mc-mpi.cpp. Is this the MPI implementation for Monte Carlo simulations?
I performed a Monte Carlo simulation to calculate the Curie temperature using the vampire-parallel code, but it did not work.
I would like to run parallel Monte Carlo simulations by splitting the time series into many small time series on different MPI ranks. Is that possible? Can I modify the source code?
Thanks!
I am designing 2D ferromagnetic insulators. For testing, I constructed a 2D system with only an isotropic Heisenberg term, but it yields a Curie temperature of nearly 30 K, which I expected to be very close to zero. I'm wondering what is wrong in my input file.
Thank you
-.mat
material:num-materials=1
#---------------------------------------------------
#---------------------------------------------------
material[1]:material-name=Cr
material[1]:damping-constant=0.01
material[1]:atomic-spin-moment=1 !muB
material[1]:material-element=Cr
material[1]:uniaxial-anisotropy-constant=0.0
-.ucf
3.48792 3.48792 26.5639534098312744
1.0000 0.0000 0.0000
0.0000 1.0000 0.0000
0.0000 0.0000 1.0000
1 1
0 0.50000 0.50000 0.44055
4 0
0 0 0 1 0 0 4.066e-22
1 0 0 0 1 0 4.066e-22
2 0 0 -1 0 0 4.066e-22
3 0 0 0 -1 0 4.066e-22
-input
#------------------------------------------
#------------------------------------------
create:full
create:periodic-boundaries-x
create:periodic-boundaries-y
create:periodic-boundaries-z
#------------------------------------------
material:file= test.mat
material:unit-cell-file = "test.ucf"
#------------------------------------------
#------------------------------------------
dimensions:unit-cell-size-x = 3.48792 !A
dimensions:unit-cell-size-y = 3.48792 !A
dimensions:unit-cell-size-z = 26.5639534098312744 !A
dimensions:system-size-x = 15.0 !nm
dimensions:system-size-y = 15.0 !nm
dimensions:system-size-z = 2.6 !nm
#------------------------------------------
##------------------------------------------
sim:minimum-temperature=0
sim:maximum-temperature=50
sim:temperature-increment=0.1
sim:time-steps-increment=1
sim:equilibration-time-steps=8000
sim:loop-time-steps=2000
sim:time-step=1e-16
##------------------------------------------
##------------------------------------------
sim:program=curie-temperature
sim:integrator=monte-carlo
#sim:two-temperature-electron-heat-capacity
##------------------------------------------
##------------------------------------------
output:temperature
output:mean-magnetisation-length
output:mean-susceptibility
screen:temperature
screen:mean-magnetisation-length
screen:mean-susceptibility
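For scale: an infinite 2D isotropic Heisenberg model has Tc = 0 (Mermin-Wagner), but a finite 15 nm system with periodic boundaries does not, and a crude mean-field estimate using the J = 4.066e-22 J and z = 4 in-plane neighbours from the .ucf above already lands near the observed value. So the ~30 K transition is plausibly a finite-size effect rather than an input-file error. A sketch of that estimate (mean_field_tc is an illustrative helper, and mean field overestimates the true transition):

```cpp
#include <cassert>

// Crude mean-field estimate Tc ~ z*J/(3*kB), using the exchange value
// and in-plane coordination number from the .ucf file above.
double mean_field_tc(double J_joule, int z) {
   const double kB = 1.380649e-23; // Boltzmann constant, J/K
   return z * J_joule / (3.0 * kB);
}
```

With J = 4.066e-22 J and z = 4 this gives roughly 39 K, the same order as the simulated ~30 K.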
Dear developer,
I set "sim:enable-dipole-fields" in the input file and got this error message: Error - Unknown control statement 'sim:enable-dipole-fields'. How can I fix it?
The program aborts, saying that the exchange matrix is greater than 1.0e18, but my exchange matrix is 1.512e-12.
20-04-2022 [14:48:35] Version : 6.0.0
20-04-2022 [14:48:35] Githash : 4c9651daecc86c3e6b6df0c8055fcd9e89fd900c
20-04-2022 [14:48:35] Opening main input file "input".
20-04-2022 [14:48:35] Parsing system parameters from main input file.
20-04-2022 [14:48:35] Opening material file "rev.mat".
20-04-2022 [14:48:35] Parsing material file for parameters.
20-04-2022 [14:48:35] Error: material:exchange-matrix on line 10 of material file must be in the range < +/- 1.0e18.
20-04-2022 [14:48:35] Fatal error: Aborting program.
But my material file is:
#---------------------------------------------------
# Number of Materials
#---------------------------------------------------
material:num-materials=1
#---------------------------------------------------
# Material 1
#---------------------------------------------------
material[1]:material-name=rev
material[1]:damping-constant=1.0
material[1]:exchange-matrix[1]=1.512e-12
material[1]:atomic-spin-moment=0.7 !muB
material[1]:second-order-uniaxial-anisotropy-constant=0
material[1]:material-element=Ag
material[1]:minimum-height=0.0
material[1]:maximum-height=1.0
Hi,
I'm trying to do a simulation where we move skyrmions and domain walls with a spin-polarized current via STT. How can I apply a spin current? I have seen papers where the same thing was done with VAMPIRE; I looked in the manual and also in the workshop examples, but could not find anything.
Thanks for any assistance!
Hi,
I have installed vampire on Linux both from the binary and from source, but now I am wondering how to run a first calculation.
There are two files named input and Co.mat; I ran these as executables, and got the following errors.
computer@linux$ ./input
./input: line 10: dimensions:unit-cell-size: command not found
./input: line 11: dimensions:system-size-x: command not found
./input: line 12: dimensions:system-size-y: command not found
./input: line 13: dimensions:system-size-z: command not found....
Kindly help us.
On the develop branch, when creating a multi-element crystal using a ucf file (such as an AFM material), vampire keeps only the first material and discards the rest.
An example is the attached output of "3c_antiferromagnets" in the workshop.
output.txt
Could you please git tag the release-3.0 and release-4.0 branches?
Whenever I load checkpoint files there is a jump in the data for any mean, like mean-magnetisation-length. The image attached shows the example in tests/time-series/, slightly modified, stopped twice and continued from checkpoint files, at the kinks in the mean-magnetisation-length.
It seems the data counter (stats::data_counter) is reset even when checkpoint files are loaded, or is just initialized to 0. I can't quite figure it out, but it would be great if this could be fixed soon.
Input and material file: (had to change the extension to upload them)
input.txt
Co.txt
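If the counter really is omitted from the checkpoint, a fix might look like including the running sums and the sample counter in the checkpoint I/O so means continue smoothly across a restart. A sketch under that assumption (MeanStat and its members are hypothetical names, not the actual stats:: interface):

```cpp
#include <cassert>
#include <sstream>

// Hypothetical sketch: persist the running sum AND the sample counter
// together, so a restarted run resumes the mean instead of restarting
// it from zero (the kink seen in mean-magnetisation-length).
struct MeanStat {
   double sum = 0.0;
   long counter = 0;
   void save(std::ostream& os) const { os << sum << ' ' << counter; }
   void load(std::istream& is) { is >> sum >> counter; } // not reset to 0
   double mean() const { return counter ? sum / counter : 0.0; }
};
```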
A newline character is missing in the makefile; the following patch is needed. Otherwise, many "No such file or directory" errors pop up when building the parallel-intel target.
--- makefile.orig 2019-01-25 10:14:55.000000000 -0700
+++ makefile 2019-04-23 23:49:46.392274998 -0700
@@ -250,7 +250,9 @@
$(MPICC) $(ICC_LDFLAGS) $(LIBS) $(MPI_ICC_OBJECTS) -o $(PEXECUTABLE)
$(MPI_ICC_OBJECTS): obj/%_i_mpi.o: src/%.cpp
- $(MPICC) -c -o $@ $(ICC_CFLAGS) $<intel: $(MPI_ICC_OBJECTS)
+ $(MPICC) -c -o $@ $(ICC_CFLAGS) $<
+
+intel: $(MPI_ICC_OBJECTS)
parallel-cray: $(MPI_CRAY_OBJECTS)
$(MPICC) $(CRAY_LDFLAGS) $(LIBS) $(MPI_CRAY_OBJECTS) -o $(PEXECUTABLE)
Also, the -std=c++0x flag is required for icc, and the MPI wrapper for icc should be mpiicpc. I would suggest the following change as well:
--- makefile.orig 2019-01-25 10:14:55.000000000 -0700
+++ makefile 2019-04-23 23:52:16.000000000 -0700
@@ -13,12 +13,13 @@
#export MPICH_CXX=g++
#export MPICH_CXX=bgxlc++
# Compilers
-ICC=icc -DCOMP='"Intel C++ Compiler"'
+ICC=icc -std=c++0x -DCOMP='"Intel C++ Compiler"'
GCC=g++ -std=c++0x -DCOMP='"GNU C++ Compiler"'
LLVM=g++ -DCOMP='"LLVM C++ Compiler"'
PCC=pathCC -DCOMP='"Pathscale C++ Compiler"'
IBM=bgxlc++ -DCOMP='"IBM XLC++ Compiler"'
MPICC=mpicxx -DMPICF
+MPIICC=mpiicpc -std=c++0x -DMPICF
CCC_CFLAGS=-I./hdr -I./src/qvoronoi -O0
CCC_LDFLAGS=-I./hdr -I./src/qvoronoi -O0
@@ -191,7 +192,7 @@
$(GCC) -c -o $@ $(GCC_CFLAGS) $(OPTIONS) $<
serial-intel: $(ICC_OBJECTS)
- $(ICC) $(ICC_LDFLAGS) $(LIBS) $(ICC_OBJECTS) -o $(EXECUTABLE)
+ $(ICC) $(ICC_LDFLAGS) $(LIBS) $(ICC_OBJECTS) -o $(EXECUTABLE)-intel
$(ICC_OBJECTS): obj/%_i.o: src/%.cpp
$(ICC) -c -o $@ $(ICC_CFLAGS) $(OPTIONS) $<
@@ -221,7 +222,7 @@
$(LLVM) -c -o $@ $(LLVM_DBCFLAGS) $(OPTIONS) $<
intel-debug: $(ICCDB_OBJECTS)
- $(ICC) $(ICC_DBLFLAGS) $(LIBS) $(ICCDB_OBJECTS) -o $(EXECUTABLE)
+ $(ICC) $(ICC_DBLFLAGS) $(LIBS) $(ICCDB_OBJECTS) -o $(EXECUTABLE)-intel-debug
$(ICCDB_OBJECTS): obj/%_idb.o: src/%.cpp
$(ICC) -c -o $@ $(ICC_DBCFLAGS) $(OPTIONS) $<
@@ -247,10 +248,12 @@
$(MPICC) -c -o $@ $(GCC_CFLAGS) $(OPTIONS) $<
parallel-intel: $(MPI_ICC_OBJECTS)
- $(MPICC) $(ICC_LDFLAGS) $(LIBS) $(MPI_ICC_OBJECTS) -o $(PEXECUTABLE)
+ $(MPIICC) $(ICC_LDFLAGS) $(LIBS) $(MPI_ICC_OBJECTS) -o $(PEXECUTABLE)-intel
$(MPI_ICC_OBJECTS): obj/%_i_mpi.o: src/%.cpp
- $(MPICC) -c -o $@ $(ICC_CFLAGS) $<intel: $(MPI_ICC_OBJECTS)
+ $(MPIICC) -c -o $@ $(ICC_CFLAGS) $<
+
+intel: $(MPI_ICC_OBJECTS)
parallel-cray: $(MPI_CRAY_OBJECTS)
$(MPICC) $(CRAY_LDFLAGS) $(LIBS) $(MPI_CRAY_OBJECTS) -o $(PEXECUTABLE)
I wrote a ucf file including the tensorial exchange interactions, and then used the material file with the statement "material:biquadratic-exchange-matrix = **e-22". When I ran vampire, no error was reported, but I noticed the message "Using gerenic/normalised form of exchange interaction with 0 interactions" during program initialization. Could you please tell me whether the biquadratic term was correctly recognized by the program? I would prefer to write the biquadratic term as part of the tensorial exchange; how could I do that?
Need to look at feasibility, but this should also reduce code size due to the removal of CUSP, and increase performance.
I tried to use "exchange:ucc-exchange-parameter" to simulate the effect of asymmetric strain, but this control statement is not recognized. Could someone tell me what statement I should use now?