quantumlib / openfermion-fqe

The Fermionic Quantum Emulator (FQE) is a fermionic simulation research tool specializing in quantum circuits emulating fermion dynamics.

License: Apache License 2.0

Python 85.69% Shell 0.29% Makefile 0.03% C 11.14% Cython 2.85%
quantum-computing quantum-chemistry quantum-chemistry-simulation quantum-circuit-simulator

openfermion-fqe's People

Contributors

awhite862, klgunst, kommerck, ktthross, ncrubin, pavoljuhas, rmlarose, shiozaki, throssellqsim, wjhuggins


openfermion-fqe's Issues

Code duplication in hamiltonians

There are a few things that could be cleaned up in these classes, notably the methods diagonal_coulomb, diagonal, etc. Some of these can just be inferred from the type, i.e. using isinstance(ham, DiagonalCoulomb) instead of

def diagonal_coulomb(self) -> bool:
    """Returns whether or not the Hamiltonian is diagonal_coulomb."""
    return True

Alternatively, one could define Hamiltonian with arguments for is_diagonal, is_diagonal_coulomb, etc., which would clean things up.

I think the inheritance structure could also be fixed in some places. E.g., DiagonalCoulomb should inherit from Diagonal instead of from Hamiltonian, as far as I understand.
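A minimal sketch of the isinstance-based check described above; the import path is assumed from the repository layout and may not match the real module exactly:

from fqe.hamiltonians.diagonal_coulomb import DiagonalCoulomb  # assumed path

def is_diagonal_coulomb(ham) -> bool:
    # Infer the property from the type instead of a per-class boolean override.
    return isinstance(ham, DiagonalCoulomb)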

Consider adding additional helper functions for integration with Cirq

I spent some time experimenting with FQE for pre-determining parameters for a variational ansatz that I want to express as a quantum circuit. I ended up writing code to take the parameters from FQE and build a cirq Circuit that produced the same wavefunction (with one qubit per spin-orbital and a Jordan-Wigner transformation).

I was a little surprised that I had to cobble together some slow and messy code to convert my FQE wavefunction into a qubit one. It was important for testing that I compare the two wavefunctions properly, and it would be great if FQE provided a helper for this that wasn't too slow on top of the exponential scaling we'd already be dealing with.

I'm not sure what the best path here is. It seems silly to reproduce the OpenFermion functionality for doing the conversions unless we're going to replace it entirely (which might be a good long term plan). Minimally, it would be nice to have a function that can at least take an FQE wavefunction and a qubit ordering for Jordan-Wigner and return a vector in the qubit Hilbert space.
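For illustration, a minimal sketch of the kind of helper being asked for, built on fqe.to_cirq (which appears in other issues below); treating that as the intended conversion path is my assumption:

import numpy as np
import fqe

def fidelity_with_qubit_wfn(fqe_wfn: 'fqe.Wavefunction', qubit_wfn: np.ndarray) -> float:
    # Convert the FQE wavefunction to a Jordan-Wigner qubit state vector
    # (one qubit per spin-orbital) and return the fidelity with qubit_wfn.
    jw_vec = fqe.to_cirq(fqe_wfn)
    return float(abs(np.vdot(jw_vec, qubit_wfn)) ** 2)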

P.S. I'm not sure if this is the best place for a feature request, so feel free to close this issue if there is a better channel.

Add format check?

In addition to type checking and linting, this can also easily be added to CI. Standard tools are yapf and black.

Is Evolution under FermionOperator Correct?

It looks like the utility that converts the FermionOperator to a SparseHamiltonian object counts the number of orbitals by the number of terms rather than by their indices. In fqe_decorators, the convert function calls build_hamiltonian, which defaults norb to zero.
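As a hedged workaround sketch (the spin-orbital to spatial-orbital mapping here is an assumption based on other snippets in this thread), norb can be passed explicitly so the orbital count is not inferred from the terms:

import openfermion as of
import fqe

op = of.FermionOperator('3^ 0', 1.0) + of.FermionOperator('0^ 3', 1.0)
# Spin orbitals 0..3 correspond to 2 spatial orbitals, so force norb=2 rather
# than letting build_hamiltonian infer it (it defaults to zero).
hamil = fqe.build_hamiltonian(op, norb=2, conserve_number=True)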

Use either pytest or unittest, not both

Currently everything in algorithm/ uses pytest for testing while everything in fqe_ops/ uses unittest. Is there a reason for using both? AFAIK it's standard to only use one.

Inconsistency between expectation values via SparseHamiltonian and get_restricted_hamiltonian?

Suppose I have an OpenFermion InteractionOperator and I want to measure its energy on a FQE wave function.

It appears there are two routes to do this:

  1. Convert the InteractionOperator to a FermionOperator and then to a FQE SparseHamiltonian (btw, it would be nice if there was a more direct way to do this)

  2. Get the tensors from the InteractionOperator and feed them into FQE's get_restricted_hamiltonian()

Naively, I would have thought that these should yield the same result, but this does not seem to be the case, not even for the Hartree-Fock state, as the following "minimal" example shows (just to be sure, I am also constructing the InteractionOperator in two different ways):

import fqe
import numpy as np
import openfermion as of

from openfermion.chem.molecular_data import spinorb_from_spatial

def integrals_to_coefficients(one_body_integrals, two_body_integrals):
    return spinorb_from_spatial(one_body_integrals, two_body_integrals/2)

nact = 2
nalpha = 1
nbeta = 1

core_energy = 0.12345134
one_body_integrals = np.array([[-1.37560394, -0.26984563],
                               [-0.26984563,  1.09725617]])
two_body_integrals = np.array([[[[-0.56763992,  0.24989089],
                                 [ 0.24989089, -0.50890225]],
                                [[ 0.24989089, -0.50890225],
                                 [ 0.45668102, -0.21692285]]],
                               [[[ 0.24989089,  0.45668102],
                                 [-0.50890225, -0.21692285]],
                                [[-0.50890225, -0.21692285],
                                 [-0.21692285, -1.23529049]]]])

# make InteractionOperator manually
one_body_coefficients, two_body_coefficients = integrals_to_coefficients(one_body_integrals, two_body_integrals)
interaction_operator1 = of.InteractionOperator(core_energy, one_body_coefficients, two_body_coefficients)

# make InteractionOperator via MolecularData
molecule = of.MolecularData(
    geometry=[['?', [0, 0, 0]]],
    basis='cc-pvdz',
    multiplicity=nalpha-nbeta+1,
)
molecule.n_orbitals = nact
molecule.n_qubits = 2 * molecule.n_orbitals
molecule.nuclear_repulsion = core_energy
molecule.one_body_integrals = one_body_integrals
molecule.two_body_integrals = two_body_integrals
interaction_operator2 = molecule.get_molecular_hamiltonian()

wfn = fqe.Wavefunction([[nalpha+nbeta, nalpha-nbeta, nact]])
wfn.set_wfn(strategy="hartree-fock")

for interaction_operator in [interaction_operator1, interaction_operator2]:
    # btw: Is there a better way to transform an InteractionOperator into a FermionOperator?
    fermion_operator = of.FermionOperator()
    for key in interaction_operator:
        fermion_operator += of.FermionOperator(key, interaction_operator[key])
    fermion_operator = fqe.hamiltonians.sparse_hamiltonian.SparseHamiltonian(fermion_operator)
    print("SparseHamiltonian energy:           ", wfn.expectationValue(fermion_operator))

    fermion_operator = fqe.get_restricted_hamiltonian(
        ([interaction_operator.one_body_tensor,
          interaction_operator.two_body_tensor]),
        e_0=interaction_operator.constant)
    print("get_restricted_hamiltonian() energy:", wfn.expectationValue(fermion_operator))

The output I get is:

SparseHamiltonian energy:            (-3.19539646+0j)
get_restricted_hamiltonian() energy: (1.26736786+0j)
SparseHamiltonian energy:            (-3.19539646+0j)
get_restricted_hamiltonian() energy: (1.26736786+0j)

Am I doing something wrong?

Broken general spin-orbital Apply

import fqe
import openfermion as of
import numpy as np
from itertools import product

wfn = fqe.Wavefunction([[2, 0, 2]])
wfn.set_wfn(strategy='random')
cirq_wf = fqe.to_cirq(wfn).reshape((-1, 1))
wfn.print_wfn()

op_to_apply = of.FermionOperator()
for p, q, r, s in product(range(2), repeat=4):
    op = of.FermionOperator(((2 * p, 1), (2 * q + 1, 1), (2 * r + 1, 0), (2 * s, 0)),
                            coefficient=np.random.randn())
    op_to_apply += op + of.hermitian_conjugated(op)

assert of.is_hermitian(op_to_apply)
print(op_to_apply)

opmat = of.get_sparse_operator(op_to_apply, n_qubits=4)
new_state_cirq = opmat @ cirq_wf
print(new_state_cirq)

# this part is because we need to pass a normalized wavefunction
norm_constant = new_state_cirq.conj().T @ new_state_cirq
new_state_cirq /= np.sqrt(norm_constant)
new_state_wfn = fqe.from_cirq(new_state_cirq.flatten(), thresh=1.0E-12)
new_state_wfn.scale(np.sqrt(norm_constant))

test_state = wfn.apply(op_to_apply)

new_state_wfn.print_wfn()

print()

test_state.print_wfn()

@shiozaki These should be the same wavefunction, no?

when initializing wf with multiple sectors normalize before returning

Wavefunctions initialized over multiple sectors are not normalized.

wfn_fqe = fqe.Wavefunction([[2, 0, 4], [2, -2, 4]], broken=None)
wfn_fqe.set_wfn(strategy="random")
wfn_fqe.print_wfn()
wfn_fqe.norm()  # this should be 1

This can be fixed by calling self.normalize() before returning from the initialization function.
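In the meantime, a sketch of the explicit fix from user code, using the normalize() and norm() calls referenced above:

wfn_fqe = fqe.Wavefunction([[2, 0, 4], [2, -2, 4]], broken=None)
wfn_fqe.set_wfn(strategy="random")
wfn_fqe.normalize()      # explicit normalization across both sectors
print(wfn_fqe.norm())    # ~1.0 after normalizing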

Expanded docs for Hamiltonian objects

From a user perspective, diagonal and diagonal_coulomb are not descriptive enough to differentiate what they represent.

Also, it would be great to have simple examples of initializing the GSO and SSO Hamiltonians.

hamiltonian_time_evolution_and_expectation_estimation.ipynb does not work

On a fresh Colab with an fqe install, the notebook gives the following error:

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-3-0b93576fec06> in <module>()
      4 import numpy
      5 import fqe
----> 6 from fqe.unittest_data import build_lih_data, build_hamiltonian
      7 numpy.set_printoptions(floatmode='fixed', precision=6, linewidth=80, suppress=True)
      8 numpy.random.seed(seed=409)

ModuleNotFoundError: No module named 'fqe.unittest_data'

CC=mpicc incorrectly recognized as icc in setup.py

When trying to install fqe via pip install fqe==v.0.2.0 I get the following error message:

      [...]
      building 'fqe.lib.libfqe' extension
      creating build/temp.linux-x86_64-3.8
      creating build/temp.linux-x86_64-3.8/src
      creating build/temp.linux-x86_64-3.8/src/fqe
      creating build/temp.linux-x86_64-3.8/src/fqe/lib
      /opt/shared/apps/compiler/gcc/11.1.0/openmpi/4.1.1/bin/mpicc -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Isrc/fqe/lib -I/fs/home/cvjjm/.conda/envs/google/include/python3.8 -c src/fqe/lib/binom.c -o build/temp.linux-x86_64-3.8/src/fqe/lib/binom.o -O3 -xHost -fPIC -qopenmp
      gcc: warning: ‘-x Host’ after last input file has no effect
      gcc: error: unrecognized command-line option ‘-qopenmp’; did you mean ‘-fopenmp’?
      error: command '/opt/shared/apps/compiler/gcc/11.1.0/openmpi/4.1.1/bin/mpicc' failed with exit code 1
      [end of output]

Both gcc 11 and 9.4 give this.

It seems that the -qopenmp flag is specific to Intel compilers? Do I have to use an Intel compiler to install fqe?

evolve_fqe_givens is too slow

It turns out the SparseHamiltonian evolution is about 10x faster than what is currently in algorithm/.

import numpy as np
import openfermion as of
from openfermion import givens_decomposition_square
import fqe


def evolve_fqe_of_givens(wfn: fqe.Wavefunction, u: np.ndarray) -> fqe.Wavefunction:
    """
    Evolve an FQE wavefunction under the basis rotation u via its Givens
    decomposition, applying each rotation through FermionOperator
    (SparseHamiltonian) time evolution.

    Args:
        wfn: fqe.Wavefunction to evolve
        u: (n//2 x n//2) unitary matrix

    Returns:
        The evolved fqe.Wavefunction
    """
    rotations, diagonal = givens_decomposition_square(u.copy())
    # Iterate through each layer and time evolve by the appropriate
    # fermion operators
    for layer in rotations:
        for givens in layer:
            i, j, theta, phi = givens
            op = of.FermionOperator(((2 * j, 1), (2 * j, 0)), coefficient=-phi)
            wfn = wfn.time_evolve(1., op)
            op = of.FermionOperator(((2 * j + 1, 1), (2 * j + 1, 0)),
                                     coefficient=-phi)
            wfn = wfn.time_evolve(1., op)

            op = of.FermionOperator(((2 * i, 1), (2 * j, 0)),
                                    coefficient=-1j * theta) + of.FermionOperator(
                ((2 * j, 1), (2 * i, 0)), coefficient=1j * theta)
            wfn = wfn.time_evolve(1., op)
            op = of.FermionOperator(((2 * i + 1, 1), (2 * j + 1, 0)),
                                     coefficient=-1j * theta) + of.FermionOperator(
                ((2 * j + 1, 1), (2 * i + 1, 0)), coefficient=1j * theta)
            wfn = wfn.time_evolve(1., op)

    # evolve the last diagonal phases
    for idx, final_phase in enumerate(diagonal):
        if not np.isclose(final_phase, 1.0):
            op = of.FermionOperator(((2 * idx, 1), (2 * idx, 0)),
                                    -np.angle(final_phase))
            wfn = wfn.time_evolve(1., op)
            op = of.FermionOperator(((2 * idx + 1, 1), (2 * idx + 1, 0)),
                                     -np.angle(final_phase))
            wfn = wfn.time_evolve(1., op)

    return wfn

Why?

Fermi-Hubbard simulation example: Low filling is called when half filling should be

This comes from an example simulating the Fermi-Hubbard model where the time-evolved wavefunction is incorrect.

import numpy as np
from scipy.linalg import expm

import openfermion as of
import fqe

hubbard = of.fermi_hubbard(1, 4, tunneling=1.0, coulomb=2.0, periodic=False)

init_wfn = fqe.Wavefunction([[2, 0, 4]])
init_wfn.set_wfn(strategy="random")

evolved_wfn = init_wfn.time_evolve(1.0, hubbard)

# Check.
wfn_cirq = fqe.to_cirq(init_wfn)
unitary = expm(-1j * of.get_sparse_operator(hubbard))
evolved_wfn_cirq = unitary @ wfn_cirq

fidelity = abs(fqe.vdot(evolved_wfn, fqe.from_cirq(evolved_wfn_cirq, thresh=1e-12)))**2
assert np.isclose(fidelity, 1.0)  # Fails

A hack to ensure the fidelity is one is setting init_wfn.sector((2, 0))._low_thresh = 0.0 before calling init_wfn.time_evolve. This causes time_evolve to call FqeData._apply_array_spin12_halffilling which produces the correct wavefunction. Without setting the threshold to zero, FqeData._apply_array_spin12_lowfilling is called and results in an incorrect wavefunction.
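Spelled out, the workaround looks like this (it relies on the private _low_thresh attribute, so it is only a temporary hack):

init_wfn = fqe.Wavefunction([[2, 0, 4]])
init_wfn.set_wfn(strategy="random")
init_wfn.sector((2, 0))._low_thresh = 0.0   # force the half-filling code path
evolved_wfn = init_wfn.time_evolve(1.0, hubbard)  # now matches the cirq result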

How to set the working threads

I noticed that the original code in fqe_data.c uses the maximal number of threads. How can I use os.environ to set the number of threads for an instance?
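A hedged sketch, assuming the C extension is threaded with OpenMP (the -fopenmp/-qopenmp flags show up elsewhere in these issues); whether FQE honors any other environment setting is an assumption:

import os
os.environ["OMP_NUM_THREADS"] = "4"   # must be set before the C extension loads

import fqe  # imported after the environment variable is in place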

Consider a unique definition for Wavefunction

Wavefunction is currently defined twice, once as the class and once as a getter function for the class. This caused a few issues with mypy, and while they are easily resolved by importing the correct Wavefunction, there is a chance for confusion. Perhaps rename the function to get_wavefunction or something similar?

Which Hamiltonian object to use

If I want to build a Hamiltonian from UHF, I would do the following in pyscf + OpenFermion + FQE. This uses the build_hamiltonian function to translate the FermionOperator into an FQE Hamiltonian. A more direct way would be to populate an FQE Hamiltonian directly (an SSO Hamiltonian object?).

import numpy as np
from functools import reduce

import pyscf
from pyscf import scf, ao2mo  # importing scf makes pyscf.scf available below
import openfermion as of
import fqe


def get_uhf_hamiltonian(mf):
    """Return OpenFermion InteractionOperator and FQE Hamiltonian"""
    assert isinstance(mf, pyscf.scf.uhf.UHF)
    # EXTRACT Hamiltonian for UHF
    norb = mf.mo_energy[0].size
    mo_a = mf.mo_coeff[0]
    mo_b = mf.mo_coeff[1]
    h1e_a = reduce(np.dot, (mo_a.T, mf.get_hcore(), mo_a))
    h1e_b = reduce(np.dot, (mo_b.T, mf.get_hcore(), mo_b))
    g2e_aa = ao2mo.incore.general(mf._eri, (mo_a,)*4, compact=False)
    g2e_aa = g2e_aa.reshape(norb,norb,norb,norb)
    g2e_ab = ao2mo.incore.general(mf._eri, (mo_a,mo_a,mo_b,mo_b), compact=False)
    g2e_ab = g2e_ab.reshape(norb,norb,norb,norb)
    g2e_bb = ao2mo.incore.general(mf._eri, (mo_b,)*4, compact=False)
    g2e_bb = g2e_bb.reshape(norb,norb,norb,norb)
    # h1e = (h1e_a, h1e_b)
    # eri = (g2e_aa, g2e_ab, g2e_bb)
    # print(h1e[0].shape)
    # print(eri[0].shape, eri[1].shape, eri[2].shape)

    # See PQRS convention in OpenFermion.hamiltonians._molecular_data
    # h[p,q,r,s] = (ps|qr)
    g2e_aa_of = np.asarray(1. * g2e_aa.transpose(0, 2, 3, 1), order='C')
    g2e_bb_of = np.asarray(1. * g2e_bb.transpose(0, 2, 3, 1), order='C')
    g2e_ab_of = np.asarray(1. * g2e_ab.transpose(0, 2, 3, 1), order='C')

    soei, stei = spinorb_from_spatial_blocks(h1e_a, h1e_b, g2e_aa_of, g2e_bb_of,
                                             g2e_ab_of)
    astei = np.einsum('ijkl', stei) - np.einsum('ijlk', stei)
    io_uhf_ham = of.InteractionOperator(0, soei, 0.25 * astei)
    fqe_uhf_ham = fqe.build_hamiltonian(of.get_fermion_operator(io_uhf_ham),
                          norb=norb, conserve_number=True)
    return io_uhf_ham, fqe_uhf_ham

where spinorb_from_spatial_blocks builds the spin-orbital Hamiltonian from h1e_a, h1e_b, g2e_aa, g2e_bb, etc.:

def spinorb_from_spatial_blocks(h1a, h1b, eriaa, eribb, eriab,
                                EQ_TOLERANCE=1.0E-12):
    n_qubits = 2 * h1a.shape[0]

    # Initialize Hamiltonian coefficients.
    one_body_coefficients = np.zeros((n_qubits, n_qubits))
    two_body_coefficients = np.zeros(
        (n_qubits, n_qubits, n_qubits, n_qubits))
    # Loop through integrals.
    for p in range(n_qubits // 2):
        for q in range(n_qubits // 2):

            # Populate 1-body coefficients. Require p and q have same spin.
            one_body_coefficients[2 * p, 2 * q] = h1a[p, q]
            one_body_coefficients[2 * p + 1, 2 * q + 1] = h1b[p, q]
            # Continue looping to prepare 2-body coefficients.
            for r in range(n_qubits // 2):
                for s in range(n_qubits // 2):

                    # Mixed spin
                    two_body_coefficients[2 * p, 2 * q + 1, 2 * r + 1, 2 * s] = (eriab[p, q, r, s])
                    two_body_coefficients[2 * p + 1, 2 * q, 2 * r, 2 * s + 1] = (eriab[q, p, s, r])

                    # Same spin
                    two_body_coefficients[2 * p, 2 * q, 2 * r, 2 * s] = (eriaa[p, q, r, s])
                    two_body_coefficients[2 * p + 1, 2 * q + 1, 2 * r + 1, 2 * s + 1] = (eribb[p, q, r, s])

    # Truncate.
    one_body_coefficients[
        np.absolute(one_body_coefficients) < EQ_TOLERANCE] = 0.
    two_body_coefficients[
        np.absolute(two_body_coefficients) < EQ_TOLERANCE] = 0.

    return one_body_coefficients, two_body_coefficients

Version/release tag

Hi @ncubin and co - now that we have bumped the version to 0.2.0, we probably want to tag 0.1.0 (and, when you sign off on the release, 0.2.0) for future reference?

Spinless fermionic tight binding: Time evolution yields AssertionError

Hi,

Noob here. I tried to time-evolve a tight-binding Hamiltonian for spinless fermions, starting from a half-filled state:

  import fqe 
  from openfermion import FermionOperator,hermitian_conjugated
  N=2
  M=4
  half_full = fqe.get_wavefunction(N, N, M)
  half_full.set_wfn(strategy='hartree-fock')
  half_full.print_wfn()
  
  #Periodic boundary conditions
  forward_hop = FermionOperator(((M-1,1),(0,0))) 
  for i in range(M-1):
      forward_hop += FermionOperator(((i,1),(i+1,0))) 
  
  
  tb_hamilt = forward_hop + hermitian_conjugated(forward_hop)
  fqe_ham = tb_hamilt
  
  fqe_ham = tb_hamilt
  #Evolve ballistically in time for t=10.2
  display(fqe_ham)
  psi_t = half_full.time_evolve(10.2, fqe_ham)
  psi_t.print_wfn()

Gave the following traceback:

  Sector N = 2 : S_z = 2
  a'0011'b'0000' (1+0j)
  1.0 [0^ 1] +
  1.0 [0^ 3] +
  1.0 [1^ 0] +
  1.0 [1^ 2] +
  1.0 [2^ 1] +
  1.0 [2^ 3] +
  1.0 [3^ 0] +
  1.0 [3^ 2]
  ---------------------------------------------------------------------------
  AssertionError                            Traceback (most recent call last)
  Cell In[6], line 22
       20 #Evolve ballistically in time for t=10.2
       21 display(fqe_ham)
  ---> 22 psi_t = half_full.time_evolve(10.2, fqe_ham)
       23 psi_t.print_wfn()
  
  File /usr/local/miniforge3/envs/openfermion/lib/python3.10/site-packages/fqe/fqe_decorators.py:393, in wrap_time_evolve.<locals>.convert(self, time, ops, inplace)
      383 """ Converts an FermionOperator to hamiltonian.Hamiltonian
      384 
      385 Args:
     (...)
      388     ops (FermionOperator or Hamiltonian): input operator
      389 """
      390 hamil = build_hamiltonian(ops,
      391                           norb=self.norb(),
      392                           conserve_number=self.conserve_number())
  --> 393 return time_evolve(self, time, hamil, inplace)
  
  File /usr/local/miniforge3/envs/openfermion/lib/python3.10/site-packages/fqe/wavefunction.py:1016, in Wavefunction.time_evolve(self, time, hamil, inplace)
     1013 else:
     1014     transformation = hamil.calc_diag_transform()
  -> 1016     permu, low, upp, work_wfn = work_wfn.transform(
     1017         transformation)
     1019     ci_trans = transformation @ permu
     1021     h1e = hamil.transform(ci_trans)
  
  File /usr/local/miniforge3/envs/openfermion/lib/python3.10/site-packages/fqe/wavefunction.py:926, in Wavefunction.transform(self, rotation, low, upp)
      924         civec.apply_columns_recursive_inplace(output, output)
      925 elif rotation.shape[0] == norb * 2:
  --> 926     assert numpy.std(rotation[:norb, norb:]) \
      927            + numpy.std(rotation[norb:, :norb]) < 1.0e-8
      928     if not external:
      929         perm1, low1, upp1 = ludecomp(rotation[:norb, :norb])
  
  AssertionError: 

Update: evolution works fine without the hopping term:

  from openfermion import FermionOperator
  import fqe 
  N=3
  M=6
  
  N_op = FermionOperator('0^ 0')
  for i in range(1, M):
      N_op += FermionOperator(((i,1),(i,0)))
  
  
  wfn_fqe = fqe.get_wavefunction(N, N, M)
  wfn_fqe.set_wfn(strategy='hartree-fock')

  #N_op+= FermionOperator('0^ 1') + FermionOperator('1^ 0')

  psi_t = wfn_fqe.time_evolve(10.2, N_op)
  psi_t.print_wfn()

Yields

  Sector N = 3 : S_z = 3
  a'000111'b'000000' (0.6851938352639844+0.7283607678315959j)

Uncomment the hopping term, and the AssertionError is raised again.

Code coverage dropped by 1%

In PR #128, where I fixed the Taylor expansion limit and max_expansion limit logic, I introduced a logic path but did not explicitly test it. Code coverage for the wavefunction.py module thus dropped by 1%:

Name                                                   Stmts   Miss  Cover   Missing
------------------------------------------------------------------------------------
src/fqe/__init__.py                                        3      0   100%
src/fqe/_fqe_control.py                                  146      0   100%
src/fqe/_version.py                                        2      0   100%
src/fqe/bitstring.py                                      55      0   100%
src/fqe/cirq_utils.py                                     40      0   100%
src/fqe/fci_graph.py                                     230      0   100%
src/fqe/fci_graph_set.py                                 105      0   100%
src/fqe/fqe_data.py                                     1472      0   100%
src/fqe/fqe_data_set.py                                  482      0   100%
src/fqe/fqe_decorators.py                                190      0   100%
src/fqe/fqe_ops/__init__.py                                1      0   100%
src/fqe/fqe_ops/fqe_operator.py                           13      0   100%
src/fqe/fqe_ops/fqe_ops.py                                60      0   100%
src/fqe/fqe_ops/fqe_ops_utils.py                          50      0   100%
src/fqe/hamiltonians/__init__.py                           1      0   100%
src/fqe/hamiltonians/diagonal_coulomb.py                  38      0   100%
src/fqe/hamiltonians/diagonal_hamiltonian.py              30      0   100%
src/fqe/hamiltonians/general_hamiltonian.py               48      0   100%
src/fqe/hamiltonians/gso_hamiltonian.py                   48      0   100%
src/fqe/hamiltonians/hamiltonian.py                       34      0   100%
src/fqe/hamiltonians/hamiltonian_utils.py                 90      0   100%
src/fqe/hamiltonians/restricted_hamiltonian.py            48      0   100%
src/fqe/hamiltonians/sparse_hamiltonian.py                74      0   100%
src/fqe/hamiltonians/sso_hamiltonian.py                   54      0   100%
src/fqe/lib/__init__.py                                   10      1    90%   14
src/fqe/lib/bitstring.py                                  19      0   100%
src/fqe/lib/cirq_utils.py                                 11      0   100%
src/fqe/lib/fci_graph.py                                  63      1    98%   128
src/fqe/lib/linalg.py                                     21      0   100%
src/fqe/lib/wick.py                                       24      0   100%
src/fqe/openfermion_utils.py                             146      0   100%
src/fqe/settings.py                                        9      0   100%
src/fqe/tensor/__init__.py                                 2      0   100%
src/fqe/tensor/tensor_utils.py                            61      0   100%
src/fqe/transform.py                                      42      0   100%
src/fqe/util.py                                          227      0   100%
src/fqe/wavefunction.py                                  623      2    99%   566, 598
src/fqe/wick.py                                          341      0   100%
tests/__init__.py                                          0      0   100%

Lines 566 and 598 are the additional for-else logic I added. A small test that builds a Hamiltonian and then sets max_expansion to 1 (so that the Taylor and Chebyshev expansions won't converge) is needed to exercise the RuntimeError that should be raised; a rough sketch follows.
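A rough sketch of such a test; the entry point (apply_generated_unitary) and the expansion keyword are assumptions based on the description above, not the confirmed API:

import numpy as np
import pytest
import fqe

def test_expansion_limit_raises():
    # Hypothetical test: with the series capped at a single term, neither the
    # Taylor nor the Chebyshev expansion can converge, so a RuntimeError is expected.
    wfn = fqe.Wavefunction([[2, 0, 2]])
    wfn.set_wfn(strategy="random")
    hamil = fqe.get_restricted_hamiltonian((np.eye(2, dtype=np.complex128),))
    with pytest.raises(RuntimeError):
        wfn.apply_generated_unitary(1.0, 'taylor', hamil, expansion=1)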

Docstring improvement on inplace=True

This keyword is only suggestive, in that it does not enforce the behavior (because enforcing that the result stays in the same place would require additional data copies in some cases). For that reason, the return value always needs to be used:

wfn = wfn.time_evolve(..., inplace=True)

This has to be stated explicitly in the docstring.

Another way to put it: inplace=False does enforce the behavior that a copy is created and the original is left unchanged, and it is the default.
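A hedged sketch of the wording being requested (not the actual FQE docstring; the signature details are placeholders):

def time_evolve(self, time, hamil, inplace=False):
    """Evolve the wavefunction by exp(-i * time * hamil).

    Note: inplace=True is only a hint; the data may still be reallocated
    internally, so always use the return value:

        wfn = wfn.time_evolve(time, hamil, inplace=True)

    inplace=False (the default) guarantees that a copy is returned and the
    original wavefunction is left unchanged.
    """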

Cirq Diagonal_coulomb is faster than FQE

The following cirq code

import numpy as np
import cirq
from itertools import product


def evolve_cirq_diagonal_coulomb(initial_wf: np.ndarray, vij):
    norbs = vij.shape[0]
    circuit = cirq.Circuit()
    qubits = cirq.LineQubit.range(2 * norbs)
    for i, j in product(range(norbs), repeat=2):
        for sigma, tau in product(range(2), repeat=2):
            if 2 * i + sigma == 2 * j + tau:
                circuit += cirq.ZPowGate(
                    exponent=-vij[i, i] / np.pi).on(qubits[2 * i + sigma])
            else:
                circuit += cirq.CZPowGate(
                    exponent=-vij[i, j] / np.pi).on(qubits[2 * i + sigma],
                                                    qubits[2 * j + tau])

    final_state = circuit.final_wavefunction(initial_state=initial_wf.flatten())
    return final_state

is equivalent to

diagonal_coulomb = fqe.diagonal_coulomb.DiagonalCoulomb(vij)
final_wfn = initial_wfn.time_evolve(1., diagonal_coulomb)

Using the same profiling code from before

    import cProfile
    import numpy as np
    import cirq
    from openfermion.circuits.primitives import optimal_givens_decomposition
    from openfermion import random_quadratic_hamiltonian
    import openfermion as of
    from scipy.linalg import expm
    import fqe
    from fqe.algorithm.low_rank import evolve_fqe_givens
    import time
    norbs = 12
    sz = 0
    nelec = norbs
    start_time = time.time()
    initial_wfn = fqe.Wavefunction([[nelec, sz, norbs]])
    print("Wavefunction Initialization ", time.time() - start_time)
    graph = initial_wfn.sector((nelec, sz)).get_fcigraph()
    hf_wf = np.zeros((graph.lena(), graph.lenb()), dtype=np.complex128)
    hf_wf[0, 0] = 1
    start_time = time.time()
    cirq_wf = of.jw_hartree_fock_state(nelec, 2 * norbs)
    print("Cirq wf initialization time ", time.time() - start_time)
    initial_wfn.set_wfn(strategy='from_data',
                        raw_data={(nelec, sz): hf_wf})

    # set up Hamiltonian
    ikappa = random_quadratic_hamiltonian(norbs, conserves_particle_number=True,
                                          real=True, expand_spin=False, seed=5)
    ikappa_matrix = ikappa.n_body_tensors[1, 0]
    diagonal_coulomb = fqe.diagonal_coulomb.DiagonalCoulomb(ikappa_matrix)

The lines

    cProfile.run('initial_wfn.time_evolve(1., diagonal_coulomb, inplace=True)', 'fqe_givens_profile')
    # cProfile.run('evolve_cirq_diagonal_coulomb(cirq_wf, ikappa_matrix)', 'fqe_givens_profile')

    import pstats
    profile = pstats.Stats('fqe_givens_profile')
    profile.sort_stats('cumtime')
    profile.print_stats(30)

will do deterministic function-call profiling. FQE runs in ~30 seconds while cirq runs in ~12 seconds. If we time as a function of the number of orbitals, we get the following graph.

[Figure: diagonal Coulomb evolution run times vs. number of orbitals]

On a log scale the difference doesn't seem large, but at 14 orbitals we are at 650 seconds vs 250 seconds. I find it surprising that FQE is a factor of ~3 slower than Cirq despite having quite a bit less data to work on. The diagonal_coulomb routine is pure Python. Vectorizing with meshgrid gave an even higher run time because the meshgrid construction takes longer than the Python for loops with the += operation. Line profiling returns:

Timer unit: 1e-06 s

Total time: 142.356 s
File: /Users/nickrubin/opt/OpenFermion-FQE/src/fqe/fqe_data.py
Function: diagonal_coulomb at line 166

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
   166                                               @profile
   167                                               def diagonal_coulomb(self,
   168                                                                    diag: 'Nparray',
   169                                                                    array: 'Nparray',
   170                                                                    inplace: bool = False) -> 'Nparray':
   171                                                   """Iterate over each element and return the scaled wavefunction.
   172                                                   """
   173         1          5.0      5.0      0.0          if inplace:
   174         1          1.0      1.0      0.0              data = self.coeff
   175                                                   else:
   176                                                       data = numpy.copy(self.coeff)
   177                                           
   178                                                   # position of orbital occupation in each bitstring
   179         1          1.0      1.0      0.0          beta_occ = []
   180       925        365.0      0.4      0.0          for bet_cnf in range(self.lenb()):
   181                                                       # print(np.binary_repr(self._core.string_beta(bet_cnf), width=self._core.norb()))
   182       924       7422.0      8.0      0.0              beta_occ.append(integer_index(self._core.string_beta(bet_cnf)))
   183                                           
   184                                                   # print("Positions of occupations in betastrings")
   185                                                   # print(beta_occ)
   186                                           
   187       925        415.0      0.4      0.0          for alp_cnf in range(self.lena()):
   188       924      16821.0     18.2      0.0              alpha_occ = integer_index(self._core.string_alpha(alp_cnf))
   189    854700     359370.0      0.4      0.3              for bet_cnf in range(self._core.lenb()):
   190                                                           # print("alpha|beta", end='\t')
   191                                                           # print(np.binary_repr(self._core.string_alpha(alp_cnf),
   192                                                           #                      width=self._core.norb()), end='\t')
   193                                                           # print(np.binary_repr(self._core.string_beta(bet_cnf),
   194                                                           #                      width=self._core.norb()), end='\t')
   195    853776     537696.0      0.6      0.4                  occ = alpha_occ + beta_occ[bet_cnf]
   196                                                           # print("occ ", occ)
   197    853776     379655.0      0.4      0.3                  diag_ele = 0.0
   198  11099088    3956021.0      0.4      2.8                  for ind in occ:
   199                                                               # sum_{i, sigma}n_{i, \sigma} where sigma in [alpha, beta]
   200  10245312    6573655.0      0.6      4.6                      diag_ele += diag[ind]
   201 133189056   49106487.0      0.4     34.5                      for jnd in occ:
   202                                                                   # \sum_{i,j, sigma, tau} n_{i, sigma}n_{j,tau}
   203                                                                   # sigma, tau in [alpha, beta]
   204 122943744   78950999.0      0.6     55.5                          diag_ele += array[ind, jnd]
   205                                           
   206    853776    2467503.0      2.9      1.7                  data[alp_cnf, bet_cnf] *= numpy.exp(diag_ele)
   207                                           
   208         1          1.0      1.0      0.0          return data

I also ran this making sure to cast the array variable to an array of floats just to check that we weren't spending overhead on type conversion (we aren't).
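For reference, a hedged sketch of vectorizing the innermost double loop from the profile above with fancy indexing instead of meshgrid (occ is the combined list of occupied spin-orbital indices for one configuration):

import numpy as np

def diag_element(diag: np.ndarray, array: np.ndarray, occ: list) -> float:
    occ = np.asarray(occ)
    # sum_i diag[i]  +  sum_{i,j} array[i, j]  over the occupied indices
    return diag[occ].sum() + array[np.ix_(occ, occ)].sum()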

Thoughts @shiozaki?

deepcopy is killing performance

    import cProfile
    import numpy as np
    import cirq
    from openfermion.circuits.primitives import optimal_givens_decomposition
    from openfermion import givens_decomposition_square
    from openfermion import random_quadratic_hamiltonian
    import openfermion as of
    from scipy.linalg import expm
    import fqe
    from fqe.algorithm.low_rank import evolve_fqe_givens
    import time
    norbs = 10
    sz = 0
    nelec = norbs
    start_time = time.time()
    initial_wfn = fqe.Wavefunction([[nelec, sz, norbs]])
    print("Wavefunction Initialization ", time.time() - start_time)
    graph = initial_wfn.sector((nelec, sz)).get_fcigraph()
    hf_wf = np.zeros((graph.lena(), graph.lenb()), dtype=np.complex128)
    hf_wf[0, 0] = 1
    start_time = time.time()
    cirq_wf = of.jw_hartree_fock_state(nelec, 2 * norbs)
    print("Cirq wf initialization time ", time.time() - start_time)
    initial_wfn.set_wfn(strategy='from_data',
                        raw_data={(nelec, sz): hf_wf})

    # set up Hamiltonian
    ikappa = random_quadratic_hamiltonian(norbs, conserves_particle_number=True,
                                          real=True, expand_spin=False, seed=5)
    ikappa_matrix = ikappa.n_body_tensors[1, 0]

    # Evolution time and unitaries
    dt = 0.275
    u = expm(-1j * dt * ikappa_matrix)
    fqe_ham = fqe.restricted_hamiltonian.RestrictedHamiltonian(
        (ikappa_matrix * dt,))

    cProfile.run('initial_wfn.time_evolve(1., fqe_ham)', 'fqe_givens_profile')

    import pstats
    profile = pstats.Stats('fqe_givens_profile')
    profile.sort_stats('cumtime')
    #
    # profile.print_stats()
    profile.print_stats('deepcopy')

Output:

Wavefunction Initialization  0.07503366470336914
Cirq wf initialization time  0.003200054168701172
Sun Aug 16 18:48:22 2020    fqe_givens_profile

         30394280 function calls (25113404 primitive calls) in 12.598 seconds

   Ordered by: cumulative time
   List reduced from 172 to 6 due to restriction <'deepcopy'>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
5279822/86    3.769    0.000    8.435    0.098 /Users/nickrubin/anaconda3/envs/fqe-dev/lib/python3.6/copy.py:132(deepcopy)
   924/84    0.033    0.000    8.432    0.100 /Users/nickrubin/anaconda3/envs/fqe-dev/lib/python3.6/copy.py:236(_deepcopy_dict)
    16968    0.378    0.000    8.163    0.000 /Users/nickrubin/anaconda3/envs/fqe-dev/lib/python3.6/copy.py:210(_deepcopy_list)
  1286964    1.377    0.000    6.299    0.000 /Users/nickrubin/anaconda3/envs/fqe-dev/lib/python3.6/copy.py:219(_deepcopy_tuple)
  3974628    0.293    0.000    0.293    0.000 /Users/nickrubin/anaconda3/envs/fqe-dev/lib/python3.6/copy.py:190(_deepcopy_atomic)
       86    0.012    0.000    0.012    0.000 {method '__deepcopy__' of 'numpy.ndarray' objects}

Everywhere in the code, copy.deepcopy is used to get a new wavefunction. Interestingly, __deepcopy__ of numpy.ndarray is a tiny fraction of this time (am I reading the cProfiler right here?). This tells me we should invest effort into removing all the copy.deepcopy calls, or implement deepcopy for FqeData and FciGraph in a smarter way. Maybe we can just copy the coeff data and use that?

Anyway, I'm curious about your thoughts, @shiozaki, on using a shallow copy for FciGraph and everything in FqeData except the coefficients. My concern is that the garbage collector wouldn't delete old wavefunctions and we would leak memory until a crash. Really, we shouldn't need the garbage collector to keep blowing away state (which I think happens because a new copy is returned on each call to time_evolve). Even the "inplace" routines (apply_cosine_sine) make multiple copies of the data. We should be able to copy just the coefficients as needed and leave the graph and everything else alone; a sketch of that idea follows.
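A hedged sketch of the "copy only the coefficients" idea; apart from the coeff attribute (which appears in other snippets in these issues), the details are placeholders rather than the actual FqeData layout:

import copy
import numpy

def _fqedata_copy(self):
    # Shallow-copy the object so the FciGraph and other metadata are shared,
    # then duplicate only the coefficient array.
    new = copy.copy(self)
    new.coeff = numpy.copy(self.coeff)
    return new

# e.g. FqeData.__deepcopy__ = lambda self, memo: _fqedata_copy(self)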

from_cirq for non-singlets

The following code is used to test the conversion for a doublet state. It fails to generate an FQE wavefunction with the correct symmetry or any amplitudes.

import openfermion as of
import numpy as np
from functools import reduce
import fqe


def lowdin_projector(op, target_eig, other_eigs):
    identity = np.eye(op.shape[0])
    projector_components = [(op - identity * oe) / (target_eig - oe) for oe in other_eigs]
    return reduce(np.dot, projector_components)


if __name__ == "__main__":
    sites = 5
    na, nb = 3, 2
    U = 4
    hubbard = of.hamiltonians.fermi_hubbard(1, sites, tunneling=1, coulomb=U,
                                            chemical_potential=0,
                                            magnetic_field=0,
                                            periodic=True,
                                            spinless=False)

    # get operators on the full Fock space
    hamiltonian = of.get_interaction_operator(hubbard)
    h_mat = of.get_sparse_operator(hamiltonian).toarray().real
    s2 = of.get_sparse_operator(of.s_squared_operator(sites))
    sz = of.get_sparse_operator(of.sz_operator(sites))
    num_op = of.get_sparse_operator(of.number_operator(2 * sites))
    # get all possible quantum numbers supported on the Fock space
    particle_num_vals = list(range(2 * sites + 1))
    del particle_num_vals[particle_num_vals.index(sites)]
    print(particle_num_vals)  # should be all particles except 5
    sz_num_vals = list(range(-sites, sites + 1, 1))
    del sz_num_vals[sz_num_vals.index(1)]
    sz_num_vals = [xx / 2 for xx in sz_num_vals]
    print(sz_num_vals) # should be all Sz except 0.5
    s_s2_vals = np.arange(sites + 1) / 2
    s2_num_vals = [s * (s + 1) for s in s_s2_vals]
    del s2_num_vals[s2_num_vals.index(0.75)]
    print(s2_num_vals)  # should [0, 2, 3.75, 6, 8.75]

    # get projectors with the Lowdin projection formula..
    # aka lagrange interpolation
    proj_n = lowdin_projector(num_op, sites, particle_num_vals)
    proj_sz = lowdin_projector(sz, 0.5, sz_num_vals)
    proj_s2 = lowdin_projector(s2, 0.75, s2_num_vals)

    # get operators for comparing local expectation value
    # and RDM elements
    local_na = [of.number_operator(2 * sites, 2 * x) for x in range(sites)]
    local_nb = [of.number_operator(2 * sites, 2 * x + 1) for x in range(sites)]
    na_site = [of.get_sparse_operator(op, n_qubits=2 * sites) for op in local_na]
    nb_site = [of.get_sparse_operator(op, n_qubits=2 * sites) for op in local_nb]

    # solve the Schrodinger equation
    # projected into the sector I care about Sz = 0.5, S = 0.5 <S^2> = 0.75
    w_full, v_full = np.linalg.eigh(h_mat)
    w_n, v_n = np.linalg.eigh(proj_s2 @ proj_sz @ proj_n @ h_mat @ proj_n @ proj_sz @ proj_sz)

    print("N exp ", v_n[:, [0]].conj().T @ num_op @ v_n[:, [0]])
    print("Sz exp ", v_n[:, [0]].conj().T @ sz @ v_n[:, [0]])
    print("Sz exp ", v_n[:, [0]].conj().T @ s2 @ v_n[:, [0]])
    wfn = fqe.from_cirq(np.array(v_n[:, [0]]).flatten(), 1.0E-12)
    wfn.print_wfn()  # prints empty

The fqe.Wavefunction thinks this sector is Sz = -1, which is the opposite of the OpenFermion convention. It also doesn't seem to think there are any coefficients in that sector.

Pip install failing

Currently, pip install fqe fails with

ERROR: openfermion 1.0.1 has requirement cirq==0.10.0, but you'll have cirq 0.11.0 which is incompatible.

I noticed requirements.txt lists cirq~=0.10, causing Cirq 0.11 to get installed, but this clashes with the OpenFermion requirement of Cirq 0.10.

Can be fixed by a new OpenFermion release or a patch FQE release where requirements.txt is updated to cirq~=0.10.0. xref #91

Nick to add these usability features

  1. Hartree-Fock wavefunctions as an easy initial setting for Wavefunction. See the Davidson example for allocating the appropriately sized matrix based on the FciGraph.
  2. Function to take me from MolecularData to RestrictedHamiltonian. From MolecularData integrals:

     oei, tei = molecule.get_integrals()
     elec_hamil = fqe.restricted_hamiltonian.RestrictedHamiltonian((oei, np.einsum('ijlk', -0.5 * tei)))

     The inverse, going from FQE back to MolecularData, is np.einsum('ijlk', -2 * tei).
  3. Commutator expectation. This just seems like a nice-to-have. The 2-body/2-body commutator can be acquired from the 3-RDM, so maybe just have 3-RDM functionality?
  4. Direct translation to a cirq wavefunction. It's okay if alpha, beta ordering is preserved. This would just be unrolling the FCI matrix and placing it into the appropriate cirq wavefunction.
  5. Non-spin-summed 3-RDM calculation, say if one is to benchmark ACSE or CT without 3-RDM reconstruction...
  6. Getting-started tutorial.
  7. Overlap between two wavefunctions. I think this exists somewhere already? (Update: this exists as vdot. I'll expose it through Wavefunction.)

@Wuggins feel free to add to this list as things come up.

pyscf to fqe wf

Leaving as a placeholder for when I have time to add commits and tests.

import numpy as np
import fqe
from pyscf.fci.cistring import make_strings

def pyscf_to_fqe_wf(pyscf_cimat, pyscf_mf=None, norbs=None, nelec=None):
    if pyscf_mf is None:
        assert norbs is not None
        assert nelec is not None
    else:
        mol = pyscf_mf.mol
        nelec = mol.nelec
        norbs = pyscf_mf.mo_coeff.shape[1]

    norb_list = tuple(list(range(norbs)))
    n_alpha_strings = [x for x in make_strings(norb_list, nelec[0])]
    n_beta_strings = [x for x in make_strings(norb_list, nelec[1])]

    fqe_wf_ci = fqe.Wavefunction([[sum(nelec), nelec[0] - nelec[1], norbs]])
    fqe_data_ci = fqe_wf_ci.sector((sum(nelec), nelec[0] - nelec[1]))
    fqe_graph_ci = fqe_data_ci.get_fcigraph()
    fqe_orderd_coeff = np.zeros((fqe_graph_ci.lena(), fqe_graph_ci.lenb()))
    for paidx, pyscf_alpha_idx in enumerate(n_alpha_strings):
        for pbidx, pyscf_beta_idx in enumerate(n_beta_strings):
            fqe_orderd_coeff[fqe_graph_ci.index_alpha(
                pyscf_alpha_idx), fqe_graph_ci.index_beta(pyscf_beta_idx)] = \
                pyscf_cimat[paidx, pbidx]

    fqe_data_ci.coeff = fqe_orderd_coeff
    return fqe_wf_ci

This function takes a CI vector from pyscf FCI or CASCI (the .ci attribute) and converts it to an FQE wavefunction, which is very useful for getting CAS wavefunctions. At some point I'd like CISD initialization as well. A usage sketch follows.
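A hedged usage sketch, assuming a standard pyscf FCI run; the molecule, basis, and geometry here are arbitrary illustrations:

from pyscf import gto, scf, fci

mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")
mf = scf.RHF(mol).run()
e_fci, civec = fci.FCI(mf).kernel()

fqe_wfn = pyscf_to_fqe_wf(civec, pyscf_mf=mf)
fqe_wfn.print_wfn()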

Hubbard U evolution

Given the sparsity of lattice Hamiltonians should we have custom evolution routines? Or does DiagonalCoulomb cover enough of these uses?

Upgrade to cirq ~=0.13

Hi,

I have a couple of projects where I'm using FQE and also need to use cirq >=0.13. Would it be possible to release a new version of FQE that supports that?

Thanks!

Problems with FQE installation

Hello, I'm a graduate student in South Korea who is trying to do quantum simulations.
I've tried to install FQE on my work computer with Python 3.10.11,
but I ran into these messages:
error: command 'C:\\Program Files\\Microsoft Visual studio\\2022\\Enterprise\\VC\\Tools\\MSVC\\14.35.32215\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for fqe
Failed to build fqe
ERROR: Could not build wheels for fqe, which is required to install pyproject.toml-based projects

So I tried again manually, using the "fqe-0.3.0-cp38-cp38-macosx_11_0_arm64.whl" file,
but the error message is:
ERROR: fqe-0.3.0-cp38-cp38-macosx_11_0_arm64.whl is not a supported wheel on this platform.

Since I've just started programming, I'd like to know what the problem is.

get_general_hamiltonian doesn't recognize a quadratic Hamiltonian in the same way that get_restricted_hamiltonian does

I expected that get_general_hamiltonian and get_restricted_hamiltonian would both automatically recognize a quadratic Hamiltonian, but it appears that this isn't currently the case:

  import fqe
  import numpy as np

  one_body_cluster_op = np.zeros((2, 2), dtype=np.complex128)

  one_body_cluster_op[1, 0] = -1
  one_body_cluster_op[0, 1] = 1

  one_body_ham = fqe.get_general_hamiltonian((-1j * one_body_cluster_op,))
  print("General Hamiltonian: ")
  print(one_body_ham.quadratic())

  one_body_restricted_ham = fqe.get_restricted_hamiltonian((-1j * one_body_cluster_op,))
  print("Restricted Hamiltonian: ")
  print(one_body_restricted_ham.quadratic())
General Hamiltonian: 
False
Restricted Hamiltonian: 
True
