aiida-quantumespresso's Introduction

aiida-quantumespresso

PyPI version PyPI pyversions Build Status Docs status

This is the official AiiDA plugin for Quantum ESPRESSO.

Compatibility matrix

The matrix below assumes the user always installs the latest patch release of the specified minor version, which is recommended.

Plugin version ranges: v4.3 < v5.0, v4.0 < v4.3, v3.5 < v4.0, v3.4 < v3.5, v3.3 < v3.4, v3.1 < v3.3, v3.0 < v3.1 and v2.0 < v3.0. The compatible AiiDA, Python and Quantum ESPRESSO versions for each range are shown as badges in the original README.

Starting from aiida-quantumespresso==4.0, the last three minor versions of Quantum ESPRESSO are supported. Older versions are supported up to a maximum of two years.

Installation

To install from PyPI, simply execute:

pip install aiida-quantumespresso

or, to install from source:

git clone https://github.com/aiidateam/aiida-quantumespresso
cd aiida-quantumespresso
pip install .

Command line interface tool

The plugin comes with a built-in CLI tool: aiida-quantumespresso. This tool is built using the click library and supports tab completion. To enable it, add the following to your shell startup script, e.g. the .bashrc or the virtual environment activate script:

eval "$(_AIIDA_QUANTUMESPRESSO_COMPLETE=source aiida-quantumespresso)"

The tool comes with various subcommands, for example to quickly launch calculations and workchains. To launch a test PwCalculation, you can run the following command:

aiida-quantumespresso calculation launch pw -X pw-v6.1 -F SSSP/1.1/PBE/efficiency

Note that this requires the code pw-v6.1 and the pseudopotential family SSSP/1.1/PBE/efficiency to be configured. See the Pseudopotentials section below for how to install them easily. Each command has a fully documented command line interface, which can be printed to screen with the help flag:

aiida-quantumespresso calculation launch ph --help

which should print something like the following:

Usage: aiida-quantumespresso calculation launch ph [OPTIONS]

  Run a PhCalculation.

Options:
  -X, --code CODE                 A single code identified by its ID, UUID or
                                  label.  [required]
  -C, --calculation CALCULATION   A single calculation identified by its ID or
                                  UUID.  [required]
  -k, --kpoints-mesh INTEGER...   The number of points in the kpoint mesh
                                  along each basis vector.  [default: 1, 1, 1]
  -m, --max-num-machines INTEGER  The maximum number of machines (nodes) to
                                  use for the calculations.  [default: 1]
  -w, --max-wallclock-seconds INTEGER
                                  The maximum wallclock time in seconds to set
                                  for the calculations.  [default: 1800]
  -i, --with-mpi                  Run the calculations with MPI enabled.
                                  [default: False]
  -d, --daemon                    Submit the process to the daemon instead of
                                  running it locally.  [default: False]
  -h, --help                      Show this message and exit.

Pseudopotentials

Pseudopotentials are installed and managed through the aiida-pseudo plugin. The easiest way to install pseudopotentials is to install a version of the SSSP through the CLI of aiida-pseudo. Simply run

aiida-pseudo install sssp

to install the default SSSP version. List the installed pseudopotential families with the command aiida-pseudo list. You can then use the name of any family in the command line using the -F flag.

Development

Running tests

To run the tests, simply clone the repository and install the package locally with the [tests] optional dependencies:

git clone https://github.com/aiidateam/aiida-quantumespresso
cd aiida-quantumespresso
pip install -e .[tests]  # install extra dependencies for testing
pytest  # run the tests

You can also use tox to run the test suite. Here the -e option specifies the Python version for the test run:

pip install tox
tox -e py39 -- tests/calculations/test_pw.py

Pre-commit

To contribute to this repository, please enable pre-commit so that the code in your commits conforms to the standards. Simply install the repository with the pre-commit extra dependencies:

cd aiida-quantumespresso
pip install -e .[pre-commit]
pre-commit install

License

The aiida-quantumespresso plugin package is released under the MIT license. See the LICENSE.txt file for more details.

Acknowledgements

We acknowledge support from the organisations whose logos are shown in the original README.

aiida-quantumespresso's People

Contributors

andresortegaguerrero, bastonero, borellim, chrisjsewell, crivella, d-tomerini, dropd, eimrek, elsapassaro, giovannipizzi, greschd, lbotsch, ltalirz, mbercx, mhdzbert, mikibonacci, mkotiuga, muhrin, normarivano, odarbelaeze, pnogillespie, qiaojunfeng, ramirezfranciscof, rikigigi, sphuber, sponce24, superstar54, unkcpz, yakutovicha, zhubonan

aiida-quantumespresso's Issues

Parsing of PwCalculation fails for vc-relax run with LDA+U with v5.0.2

When running pw.x (tested on v5.0.2) in vc-relax mode with LDA+U, the parser uses the 'Forces acting on atoms' marker to start parsing the atomic forces. However, with the LDA+U switch, output related to Hubbard U is printed directly after this line instead of the atomic forces, causing the parsing to fail.

This seems to be particular to v5.0.2, so it might not be worth fixing given how old that version is. The problem is verified to be absent in Quantum ESPRESSO v6.1.

Add option to parse atomic occupations for DFT+U calculations

For DFT+U calculations, the standard output will also print the electronic occupations of the Hubbard sites. The output has the following format:

atom    1   Tr[ns(na)] =   7.00000
    eigenvalues: 
  0.700  0.700  0.700  0.700  0.700
    eigenvectors:
  1.000  0.000  0.000  0.000  0.000
  0.000  1.000  0.000  0.000  0.000
  0.000  0.000  1.000  0.000  0.000
  0.000  0.000  0.000  1.000  0.000
  0.000  0.000  0.000  0.000  1.000
    occupations:
  0.700  0.000  0.000  0.000  0.000
  0.000  0.700  0.000  0.000  0.000
  0.000  0.000  0.700  0.000  0.000
  0.000  0.000  0.000  0.700  0.000
  0.000  0.000  0.000  0.000  0.700

This is printed for each Hubbard site. The number at the end of the line containing Tr[ns(na)] is the number of electrons that can be associated with that atomic site. It would be nice to provide a flag for the settings input node that, when set, triggers the parsing of these numbers into an atomic_occupations ParameterData output node.
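
A minimal sketch of how these numbers could be extracted from the pw.x stdout; the helper name and its integration into the parser are hypothetical:

import re

def parse_atomic_occupations(stdout):
    """Extract Tr[ns(na)] for each Hubbard site (hypothetical helper)."""
    # lines look like: "atom    1   Tr[ns(na)] =   7.00000"
    pattern = re.compile(r'atom\s+(\d+)\s+Tr\[ns\(na\)\]\s*=\s*([\d.]+)')
    return {int(site): float(trace) for site, trace in pattern.findall(stdout)}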

Parsing of PwCalculation broken for Quantum ESPRESSO 6.2

The output of the volume in the stdout of pw.x has changed in 6.2 to also include the volume in units of cubic Angstrom:

447.46796 a.u.^3 (    66.30791 Ang^3 )

This breaks the parsing of the volume in aiida_quantumespresso/parsers/raw_parser_pw.py on line 1136:

volume = float(line.split('=')[1].split('(a.u.)^3')[0])

This is probably one of many format changes that will break the parsing of PwCalculation runs with 6.2.
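
A hedged sketch of a more tolerant parse that handles both the old and the 6.2 format, assuming line is the same stdout line the parser already matched on:

import re

# matches both "=  447.46796 (a.u.)^3" and "=  447.46796 a.u.^3 (   66.30791 Ang^3 )"
match = re.search(r'=\s*([\d.]+)\s*\(?a\.u\.\)?\^3', line)
if match:
    volume = float(match.group(1))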

clean_workdir in combination with final_scf crashes for PwRelaxWorkChain

When one passes both final_scf and clean_workdir to the PwRelaxWorkChain, the final scf calculation will fail. This is because the clean_workdir input was simply funneled to each PwBaseWorkChain, whose working directory would therefore be cleaned as soon as it finished; however, the workchain that runs the final scf calculation depends on the remote data of the previous workchain, which by then has already been cleaned. To fix this, the clean_workdir option should not be passed to the PwBaseWorkChain; instead, the PwRelaxWorkChain itself should take care of cleaning the calculations of all its sub-workchains, and only at the end of its own execution.

Incorrect method of retrieving last PwCalculation from PwBaseWorkChain in the PwBandsWorkChain

In the results step of the PwBandsWorkChain, I need to retrieve the final PwCalculation of the PwBaseWorkChain to obtain the final BandsData. I do so by calling get_outputs with the link type set to LinkType.CALL, which returns a list, and taking the first element. However, if the base workchain launched multiple calculations, the retrieved calculation may not be the last and successful one.

Q2r plugin fails to load the ForceconstantsData class

We get SUBMISSIONFAILED upon submission of any q2r calculation.


474031: SUBMISSIONFAILED
*** Scheduler output: N/A
*** Scheduler errors: N/A
*** 1 LOG MESSAGES:
+-> ERROR at 2017-10-26 15:56:21.204744+00:00
| Submission of calc 474031 failed, check also the log file! Traceback: Traceback (most recent call last):
| File "/home/aiida/codes/AiiDA/aiida_core/aiida/daemon/execmanager.py", line 475, in submit_calc
| folder, use_unstored_links=False)
| File "/home/aiida/codes/AiiDA/aiida_core/aiida/orm/implementation/general/calculation/job/init.py", line 1452, in _presubmit
| FileSubclass = DataFactory(subclassname)
| File "/home/aiida/codes/AiiDA/aiida_core/aiida/orm/utils.py", line 41, in DataFactory
| return BaseFactory(module, Data, "aiida.orm.data")
| File "/home/aiida/codes/AiiDA/aiida_core/aiida/common/pluginloader.py", line 178, in BaseFactory
| return get_plugin(category, module)
| File "/home/aiida/codes/AiiDA/aiida_core/aiida/common/pluginloader.py", line 111, in get_plugin
| "No plugin named '{}' found for '{}'".format(name, category))
| MissingPluginError: No plugin named 'forceconstants' found for 'data'

Output node documentation

Could you please document somewhere what a standard output node looks like?

I.e. just copy-paste a standard example of a ParameterData node for a PW (and other) calculation into the docs.
That way, other plugin developers can stay close to the QE output nodes (warnings have the same structure, or certain key names are the same). It is not feasible to look into the plugin code, or to run a QE calculation, just to get this. Also, this way users can discuss what might be missing.

Thanks!

Change FAILED status of PwCalculation that terminated due to maximum CPU time being exceeded

This issue was originally opened in the aiida_core repository and has now been migrated here. Below is the original discussion:

@nmounet If the PwParser (or BasicPwParser) finds the string 'Maximum CPU time exceeded', the calculation is classified as failed.
When running MD, one should set the flag max_seconds so that the pw can exit gracefully, and there's nothing wrong with that. One does as many steps as the allocation or scheduler system allows.

So there should be no classification as failed for md calculations.
Or maybe this warning should be a minor warning for all calculations?

@lekah Adding another state could solve that issue. But I think that it should be the job of the workflow to decide whether to run another time, to restart or to exit in a controlled way. The parser gets all the warnings, so the information is there.
I put this as a point to debate: A calculation should only be considered failed if it did not produce parse-able output.

One more thing to consider: It's in principle possible to set a calculation from 'FINISHED' to 'FAILED', but the reverse is not the case. If we let a calculation fail, it is - to the average user - non-reversible and he can't use that calculation to restart any more. In the case of long calculations (which is the case when the max cpu time is reached) that is catastrophic.

@nmounet I still think it's important to have names that clearly reflect the real state of a calculation. It's then much easier to program a workflow. To me, "FINISHED" means that it's finished and thus there is no point in restarting it.
As for the second remark: a FAILED calculation CAN be restarted (in the pw plugin, you just need to set "force_restart" to True).

@giovannipizzi I tend to agree more with Leonid (which is in line with some previous discussions with Boris). We should try to stick to the calculation state FAILED only if the parser really could not understand anything of the output. FINISHED in this context should mean 'finished', not implicitly 'finished correctly'. We could think of having a special state 'FINISHED*', meaning that the parser wants to highlight that this calculation requires attention, but I'm not sure that's the best way. Probably we can just show it as FINISHED* if there is at least one warning, but nothing more.

@lekah I agree with Nicolas that states should be very clear indicators. In that sense, it doesn't make sense (to me) that you can restart from a "failed" calculation. If it has failed, there is something wrong, and restarting from it could also be said to break the provenance. Since there are many reasons why something can fail, the result is not reproducible! Also, in the new workflow system, the workflow exits when a calculation has failed (issue #261). This only makes sense if a calculation being failed really means there's something terribly wrong! We need to be consistent...

@nmounet I insist, but what about a "PARSED" state? It's not FINISHED (which is the ultimate state of completion of a calculation), and it is not FAILED because it produced something, so it is PARSED.

Implement automatic_parallelization for the workflows

The old PwWorkflow provided an input automatic_parallelization which would expect three keys:

max_wall_time_seconds
target_time_seconds
max_num_machines

The workflow would then run an initial calculation to determine some dimensional parameters of the problem, and from these determine a set of command line and parallelization settings.
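
A minimal sketch of what such an input could look like; the exact structure is an assumption based on the three keys listed above, and the values are arbitrary:

inputs['automatic_parallelization'] = ParameterData(dict={
    'max_wall_time_seconds': 86400,  # hard limit imposed by the scheduler
    'target_time_seconds': 3600,     # desired runtime per calculation
    'max_num_machines': 8,           # upper bound on the number of machines
})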

PARSINGFAILED should not be treated as an unexpected state in PwBaseWorkChain

Sometimes a calculation can end up in a PARSINGFAILED state because of problems with the cluster. For example, the scratch could be temporarily unavailable, leading to corrupted or empty output files. In this case the calculation should not be treated as unrecoverable; the workchain should simply resubmit it. This should probably also implement some exponential backoff scheme, to give the original problem time to fix itself.
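
A generic sketch of the kind of exponential-backoff retry loop proposed here; the helper function and the interval values are illustrative:

import time

max_attempts = 5
interval = 60  # seconds before the first retry
for attempt in range(max_attempts):
    if try_resubmit_calculation():  # hypothetical helper that reports success
        break
    time.sleep(interval)
    interval *= 2  # double the waiting time after every failed attempt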

Implement robust error codes for parser warnings

Currently, the parser writes warnings as strings to a dictionary under the parser_warnings key of the ParameterData node. Workflows have to match these strings to determine the fate of the calculation and what needs to be done. This is a fragile method; it would be much better if the warning messages were replaced by well defined error codes, potentially keeping the strings as error details. Error handling can then be done on the basis of well defined error codes.
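
A minimal sketch of the idea; the code names and numbers below are invented for illustration:

from enum import IntEnum

class ParserErrorCode(IntEnum):
    """Hypothetical well defined error codes for the pw.x parser."""
    OUT_OF_WALLTIME = 400
    ELECTRONIC_CONVERGENCE_NOT_REACHED = 410
    IONIC_CONVERGENCE_NOT_REACHED = 420

# workflows can then match on robust codes instead of fragile strings:
if error_code == ParserErrorCode.OUT_OF_WALLTIME:
    restart_from_last_calculation()  # hypothetical handler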

Add parsed occupation numbers by default to PwCalculation outputs

The occupation numbers after a PwCalculation are often required for analysing the calculation and preparing restarts, and should be parsed by default into an ArrayData node. To do this correctly requires a change in the CalcInfo data structure of aiida_core, so that issue has to be solved before this can be implemented in aiida-quantumespresso.

PW Parser does not create a BandsData node among output of 'bands' calculation - pw 6.1

I submit a 'bands' calculation; it runs correctly, but among the output dict I cannot find a BandsData connected to the calculation via an output link. I made sure I had also_bands = True in the input settings. For example, in the verdi shell:

In [15]: calc = load_node(5258)

In [16]: calc.inp.parameters.get_dict()['CONTROL']['calculation']
Out[16]: u'bands'

In [17]: calc.inp.settings.get_dict()
Out[17]: {u'also_bands': True}

In [18]: calc.get_outputs_dict()
Out[18]:
{u'output_array': <ArrayData: uuid: 89c67565-efb0-440a-9024-66c24b5fae6b (pk: 5262)>,
u'output_array_5262': <ArrayData: uuid: 89c67565-efb0-440a-9024-66c24b5fae6b (pk: 5262)>,
u'output_parameters': <ParameterData: uuid: b8c09955-de29-48ff-8df6-d103fae78d20 (pk: 5261)>,
u'output_parameters_5261': <ParameterData: uuid: b8c09955-de29-48ff-8df6-d103fae78d20 (pk: 5261)>,
u'remote_folder': <RemoteData: uuid: 4db83e64-891f-4e9a-8681-93479e324a2a (pk: 5259)>,
u'remote_folder_5259': <RemoteData: uuid: 4db83e64-891f-4e9a-8681-93479e324a2a (pk: 5259)>,
u'retrieved': <FolderData: uuid: 6bf387c6-fec3-4fa8-a67b-8dc266b5dd6b (pk: 5260)>,
u'retrieved_5260': <FolderData: uuid: 6bf387c6-fec3-4fa8-a67b-8dc266b5dd6b (pk: 5260)>}

System Configuration:

  • pw 6.1
  • AiiDA 0.9.0
  • default qe plugin (i.e. the one shipped when checking out tag v0.9.0 of aiida, not the one installed with pip install aiida-quantumespresso)

Use `parent_folder` instead of `parent_calc` for PhBaseWorkChain

The current PhBaseWorkChain uses parent_calc as an input for the restart calculation; however, because Calculation nodes could originally not be inputs of WorkCalculation nodes, a FrozenDict is wrapped around it. This complicates the provenance graph a bit. Since the PhCalculation also accepts a RemoteData node as parent_folder, it is better to have the PhBaseWorkChain accept this as input as well.

The clean step is not effectuated when a workchain aborts

When a PwRelaxWorkChain or PwBaseWorkChain aborts through an abort_nowait or abort call in one of the steps of the outline, the final clean step will never be reached. As a result, the work directories of any calculations that the workchain called, directly or indirectly, will not be cleaned, even if the user explicitly set clean_workdir to True. The solution would be to override one of the on_transition methods of the Process class that is always called at the termination of the Process, and put the cleaning logic there. This ensures that calculation work directories are always cleaned just before the Process terminates.
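
A hedged sketch of the proposal, using the on_terminated hook that recent aiida-core versions expose on the Process class; the _clean_workdir helper is hypothetical:

class PwBaseWorkChain(WorkChain):

    def on_terminated(self):
        """Called when the process terminates, whether it finished, failed or was aborted."""
        super(PwBaseWorkChain, self).on_terminated()
        if self.inputs.clean_workdir.value:
            self._clean_workdir()  # hypothetical: clean the remote folders of all called calculations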

Convert CLI test scripts to use click

Currently all the CLI scripts use argparse, but moving to click would simplify maintenance and input validation, since the latter can be handled with predefined click options.
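
A minimal sketch of what a converted script could look like; the option set is illustrative and mirrors the ph --help output shown earlier:

import click

@click.command()
@click.option('-X', '--code', required=True, help='A single code identified by its ID, UUID or label.')
@click.option('-d', '--daemon', is_flag=True, help='Submit the process to the daemon instead of running it locally.')
def launch(code, daemon):
    """Launch a test calculation."""
    click.echo('launching with code {}, daemon={}'.format(code, daemon))

if __name__ == '__main__':
    launch()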

Remove parsing state checks from calculation parsers

As long as parse_with_retrieve does not store any nodes, this is just as safe and more convenient.

The check appears in at least the following locations, sometimes commented out (can safely be removed too):

Add 'settings' key to store kpoint files in repository when parsing bands

In PR #36, code was merged that deprecates the also_bands key in favor of always parsing the bands by default, but the k-point eigenvalue XML files that were used for the parsing are no longer stored in the repository. We should provide an optional settings key to switch back to the old behavior of storing the band files in the repository.

I would need kresolveddos in projwfc. It is extremely useful

In the following lines of code, kresolveddos is disabled. Would it be possible to enable it?

This feature is quite useful in QE since it allows one, from the data contained in the .xml files produced in the .save directory, to highlight the contributions coming from a specific set of atomic orbitals in a band structure.

It would be great to allow the feature in AiiDA, I can provide QE examples if needed.

Kind regards

Carlo Pignedoli

Define only_initialization option to PhBaseWorkChain

Exposing a dedicated only_initialization input on the PhBaseWorkChain is better than requiring the user to set ONLY_INITIALIZATION = True in the settings input node. This will encourage the user to launch a PhBaseWorkChain for an initialization calculation, instead of a bare PhCalculation, which means that basic error handling is taken care of. A sketch of the difference follows below.
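
A minimal sketch of the difference for the user; the dedicated input is the proposal of this issue and does not exist yet, and the import paths of ParameterData and Bool depend on the aiida version:

# current: hide the flag inside the generic settings node
inputs['settings'] = ParameterData(dict={'ONLY_INITIALIZATION': True})

# proposed: a dedicated, validated input on the workchain
inputs['only_initialization'] = Bool(True)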

Fail to parse the BandsData in Pwimmigrant

Dear AiiDA Team,

I am using aiida 0.10.rc3.

I followed the pwimmigrant tutorial of the aiida_quantumespresso plugin. The script worked just fine, and the scf.in, scf.out, bands.in and bands.out files were successfully imported. However, I didn't find the BandsData among the output nodes.

verdi calculation list -a
307 1h ago FINISHED itplin quantumespresso.pwimmigrant
313 1h ago FINISHED itplin quantumespresso.pwimmigrant

Where node 307 is the scf calculation and 313 is the band calculation.

(aiidapy-new) aiida-topo$verdi calculation show 313


type PwimmigrantCalculation
pk 313
uuid 3ab7bee3-33db-48d2-b10d-167c9f1c0913
label
description
ctime 2017-11-29 11:12:39.892687+00:00
mtime 2017-11-29 11:12:52.094575+00:00
computer [1] itplin
code pw.x6.2-itplin


INPUTS:

Link label PK Type


parent_calc_folder 308 RemoteData
pseudo_O 5 UpfData
parameters 309 ParameterData
settings 310 ParameterData
pseudo_Ca 8 UpfData
kpoints 311 KpointsData
pseudo_Mn 6 UpfData
structure 312 StructureData
pseudo_Re 7 UpfData

OUTPUTS:

Link label PK Type


remote_folder 314 RemoteData
retrieved 315 FolderData
output_parameters 316 ParameterData
output_array 317 ArrayData

Here is a segment of my script:
calc_scf = PwimmigrantCalculation(computer=computer,
                                  resources=resources,
                                  remote_workdir=remote_workdir,
                                  input_file_name=scfin,
                                  output_file_name=scfout)

calc_bands = PwimmigrantCalculation(computer=computer,
                                    resources=resources,
                                    remote_workdir=remote_workdir,
                                    input_file_name=bandsin,
                                    output_file_name=bandsout)

calc_scf.use_code(code)
calc_bands.use_code(code)

with transport as open_transport:
    calc_scf.create_input_nodes(open_transport)
    calc_scf.prepare_for_retrieval_and_parsing(open_transport)

    calc_bands.create_input_nodes(open_transport,
                                  parent_calc_folder=calc_scf.out.remote_folder,
                                  settings_dict={'also_bands': True})
    calc_bands.prepare_for_retrieval_and_parsing(open_transport)

Thanks for Leonid's reply; I am looking forward to the new version.

Release version for operation with aiida-core v0.11.4

The version of aiida-core that will come after v0.11.0 will contain the improved workflow engine. With that, there will be some minor backwards incompatible changes to the API that will affect the workchains in this plugin. A version v2.0 should be released that contains all the latest features and works with aiida-core==0.11.0, before migrating to the newer version of aiida-core.

PW Parser does not create an output_structure node after 'vc-relax' with pw 6.1

I submit a vc-relax calculation; it runs correctly, but among the output dict I cannot find a StructureData connected to the calculation via an output_structure link. For example, in the verdi shell:

In [8]: calc = load_node(5103)

In [9]: calc.inp.parameters.get_dict()['CONTROL']['calculation']
Out[9]: u'vc-relax'

In [10]: calc.get_outputs_dict()
Out[10]:
{u'output_parameters': <ParameterData: uuid: 3f57d847-1920-4699-9f02-9518764cfcbb (pk: 5171)>,
u'output_parameters_5171': <ParameterData: uuid: 3f57d847-1920-4699-9f02-9518764cfcbb (pk: 5171)>,
u'output_trajectory': <TrajectoryData: uuid: c5b7c67c-f753-4a6e-b29b-188deeb9098c (pk: 5172)>,
u'output_trajectory_5172': <TrajectoryData: uuid: c5b7c67c-f753-4a6e-b29b-188deeb9098c (pk: 5172)>,
u'remote_folder': <RemoteData: uuid: 78c90632-7421-4058-ab81-f744d87ab641 (pk: 5148)>,
u'remote_folder_5148': <RemoteData: uuid: 78c90632-7421-4058-ab81-f744d87ab641 (pk: 5148)>,
u'retrieved': <FolderData: uuid: a796779b-36f2-4fef-bac9-73c9d42a1867 (pk: 5170)>,
u'retrieved_5170': <FolderData: uuid: a796779b-36f2-4fef-bac9-73c9d42a1867 (pk: 5170)>}

System Configuration:

  • pw 6.1
  • AiiDA 0.9.0,
  • default qe plugin (i.e. the one shipped when checking out tag v0.9.0 of aiida, not the one installed with pip install aiida-quantumespresso)

Group functionality in PwRelaxWorkChain sometimes adds the WorkCalculation instead of PwCalculation

The group input of the PwRelaxWorkChain allows the user to specify a Group to which the last run PwCalculation will be added. The logic that is used to get the calculation:

calculation = workchain.out.output_parameters.inp.output_parameters

is broken. Because the PwBaseWorkChain also returns the output_parameters node, there will be two input links with the label output_parameters from the ParameterData. Since the .inp method will select a random one, sometimes the WorkCalculation node is returned and sometimes the JobCalculation. To make this deterministic, one should specify the node_type argument of the get_inputs method.
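
A hedged sketch of the deterministic variant suggested above; the import path follows the aiida-core versions of that era:

from aiida.orm.calculation.job import JobCalculation

# select only JobCalculation nodes among the inputs of the output_parameters node
calculations = workchain.out.output_parameters.get_inputs(node_type=JobCalculation)
calculation = calculations[0]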

PwImmigrant does not parse bands

When importing a bands calculation, the PwImmigrant does not create a BandsData instance. This needs to be implemented and the documentation updated.

Add max_meta_convergence_iterations to PwRelaxWorkChain

Currently the PwRelaxWorkChain will run PwBaseWorkChains forever if the meta_convergence option is enabled and the convergence threshold is not met. This should be limited by a maximum number that can be defined in the inputs.

In the meantime, we should also expose the max_iterations input of the PwBaseWorkChain on the PwRelaxWorkChain.

Implement complete parsing based on new XML format and XSD schema files

To do (some of these might be deferred to a further release):

  • lattice_parameter_xml and number_of_species: take from ['input'] if not present in ['output'] -- WON'T DO unless we find cases where they are missing
  • parse exit status
  • parse output -> convergence_info (mandatory in schema 1.0, optional after that)
    • broken output? Ask Pietro
  • after calling xsd.to_dict(), check the results of 'errors' -- DONE as warnings (see the sketch after this list)
  • also parse bands from XML? -- Seems already done
  • fix various time_reversal flags
  • confirm that nbnd_up == nbnd_dw always holds (we are indirectly checking it with an assert, but it is not directly checked)
  • update / add docstrings
  • parse Hubbard stuff (NB: mind schema changes, 'label' is now optional) -- WON'T DO: Seb&Iurii: nothing needs to be parsed from pw.x; all Hubbard-related outputs are identical to inputs.
  • maybe related: #188
  • rationalize the use of custom asserts, exceptions, and logging
  • parse new timing type (optional) -- Deferred until I can see examples
  • parse nsym and nrot, and use them as extra validation
  • Tests:
    • SCF, (vc-)relax, NSCF, BANDS, Berry phase, NEB (? it uses the xml parser)
    • with elementaries and compounds
    • no spin, spin-polarized, non-collinear spin polarized
    • try to automate the checks of equivalent output between old and new XML
    • Hubbard
    • a convoluted case where occupations = fixed and total_magnetization = (a small positive or negative integer): this triggers nbnd_up != nbnd_dw, and should give nbnd_up - nbnd_dw = total_magnetization.

(edited by @borellim)
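
Since the list above mentions xsd.to_dict() and collecting 'errors' as warnings, here is a minimal sketch of XSD-driven decoding with the xmlschema package; the file names are placeholders:

import xmlschema

schema = xmlschema.XMLSchema('qes_schema.xsd')  # placeholder XSD file
# with lax validation, to_dict returns the decoded data together with the list of errors
data, errors = schema.to_dict('data-file-schema.xml', validation='lax')
for error in errors:
    print(error)  # these would be surfaced as parser warnings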

Running PwBandsWorkChain with a static kpoints mesh

I had trouble running the PwBandsWorkChain with a static mesh provided as input instead of a k-mesh derived from a density: the first scf still uses a k-mesh derived from the density. I think the problem is at line 127 of the bands.py file (if 'kpoints_distance' in self.inputs:), since kpoints_distance is always in self.inputs (see line 32). A sketch of a possible fix follows below.

Furthermore, if someone already has an optimized structure from a previous workflow, it would be nice to be able to skip the first relaxation step of the PwBandsWorkChain.
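
A hedged sketch of the fix suggested above, as a fragment of the workchain step that prepares the scf inputs; the surrounding code and input names are assumptions based on the issue description:

# prefer an explicitly provided mesh over the k-point density
if 'kpoints' in self.inputs:
    inputs['kpoints'] = self.inputs.kpoints
else:
    inputs['kpoints_distance'] = self.inputs.kpoints_distance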

Inconsistent usage of conversion factors

The plugin uses several constants during parsing, which are currently defined in and taken from aiida_core; see aiida_quantumespresso.parsers.constants. The constants ry_to_ev and ry_si are mutually inconsistent, but both are used: the first in general, the latter for the parsing of the stress. These constants will be removed from aiida_core in the near future (see the issue on aiida_core) and will have to be moved here. When the parsing is updated to use the new XML schema, we should take the opportunity to also make the usage of the constants consistent.

Parsing Electronic and Ionic Dipole in pw calculations with applied external electric field

When performing a pw calculation in the presence of an applied external electric field, the electronic and ionic dipoles are printed in pw.out for every iteration of the scf procedure. See PW/examples/example10:

pw.in

&control
    calculation='scf'
    restart_mode='from_scratch',
    prefix='silicon',
    lelfield=.true.,
    nberrycyc=3
    pseudo_dir='$PSEUDO_DIR/',
    outdir='$TMP_DIR/'

 /
 &system
    ibrav= 1, celldm(1)=10.18, nat=  8, ntyp= 1,
    ecutwfc = 20.0
 /
 &electrons
    diagonalization='david',
    conv_thr =  1.0d-8,
    mixing_beta = 0.5,
    startingwfc='random',
    efield_cart(1)=0.d0,efield_cart(2)=0.d0,efield_cart(3)=0.001d0

pw.out

Electronic Dipole per cell (a.u.)  0.926395840102385
Ionic Dipole per cell (a.u.)   115.173552519665
Electronic Dipole on Carthesian axes
           1 -1.208125742863772E-005
           2 -4.814216075399945E-006
           3  0.926395840102385
Ionic Dipole on Carthesian axes
           1   115.173552519665
           2   115.173552519665
           3   115.173552519665

I would need these quantities to be parsed by AiiDA (and ideally also returned in the output_parameters of the PwBaseWorkChain).
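
A minimal sketch of how these lines could be extracted from the stdout; the regex is anchored on the output quoted above, and the integration into the parser is left open:

import re

electronic, ionic = [], []
pattern = re.compile(r'(Electronic|Ionic) Dipole per cell \(a\.u\.\)\s+([-+0-9.Ee]+)')
for line in stdout.splitlines():
    match = pattern.search(line)
    if match:
        (electronic if match.group(1) == 'Electronic' else ionic).append(float(match.group(2)))
# the last entries correspond to the final scf iteration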

Add support for RETRIEVALFAILED in PwBaseWorkChain

Just as for the SUBMISSIONFAILED status, a RETRIEVALFAILED status may just indicate "temporary" problems with the cluster. In the latter case, the calculation may even have completed correctly, so aborting the workchain would waste computing time. There should be some error handling mechanism that implements an exponential backoff scheme and attempts to retrieve the calculation at a later time. This is not as crucial for the SUBMISSIONFAILED case, since there no potentially successfully completed calculation can be wasted.

Bug in pw base workflow

The spec expects a ParameterData node here, but these lines then use it as a dictionary; ParameterData does not offer the .get method.

VC-relaxation restarts

VC-relaxations are currently not handled properly, it seems. The old workflows restart from the output structure even of a failed calculation; the new workflow system restarts from the calculation without updating the structure. Without any parameters changing, this just makes the calculation crash again and the workflow produce no output.
