
qhbm-library's Introduction

QHBM Library

This repository is a collection of tools for building and training Quantum Hamiltonian-Based Models. These tools depend on TensorFlow Quantum, and are thus compatible with both real and simulated quantum computers.

Installation instructions and contribution instructions can be found in the docs folder.

This is not an officially supported Google product.

qhbm-library's People

Contributors

ecsbeats, farice, jaeyoo, lockwo, sahilpatelsp, thubregtsen, zaqqwerty

qhbm-library's Issues

tf.function

We would like to use tf.function for performance reasons. The first question to ask is whether we want all library functions to be tf.functions or leave it up to the user to define their own tf.functions. If we decide on the former and further wish to support non-tensor inputs, then a proposed solution would be to have the exposed library functions convert all arguments to tensors and then pass these tensor arguments to an inner tf.function that performs the core functionality of the exposed function. This solution would avoid unnecessary retracing, since all inputs to the inner tf.function are tensors.
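
As a minimal sketch of that proposal (the function names here are illustrative, not the library's API), the exposed function converts everything to tensors and delegates to an inner tf.function, so varying Python inputs do not trigger retracing:

    import tensorflow as tf

    @tf.function
    def _inner_weighted_sum(counts, values):
        # Core functionality; only ever sees tensor arguments.
        return tf.reduce_sum(tf.cast(counts, tf.float32) * values)

    def weighted_sum(counts, values):
        # Exposed function; accepts lists, numpy arrays, or tensors.
        counts = tf.convert_to_tensor(counts)
        values = tf.convert_to_tensor(values, dtype=tf.float32)
        return _inner_weighted_sum(counts, values)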

Expectation of a modular Hamiltonian against a QHBM

The logarithm of a QHBM can be treated as a Hamiltonian. Call a Hamiltonian represented in this way a "modular Hamiltonian". We should be able to take the expectation value of a modular Hamiltonian against a QHBM.
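
For concreteness, using notation from the QHBM papers rather than anything in this codebase: a QHBM represents a state of the form

    rho_{theta,phi} = U(phi) exp(-K_theta) U(phi)^dagger / Z_theta

so its modular Hamiltonian is -ln(rho) = U(phi) K_theta U(phi)^dagger + ln(Z_theta), and taking its expectation against another QHBM sigma means estimating Tr[sigma * (-ln rho)].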

Add nightly build

Having a nightly build would make research easier - features could be used as soon as they are implemented.

Add documentation

We need a process to collect the docstrings into auto-generated Markdown files.

Combine QMHL and VQT tutorials

After making an example for QVR, it will be clearer what to highlight in the merged examples. The reason to merge them is to eliminate redundancy in the QHBM model exposition section.

Refactor EBM module to support TFP distributions

In the QHBM Library design doc, @geoffreyroeder described a refactor of the EBM module. The refactor has two main goals: unbundling model specification from model inference, and having our probability distributions inherit from tfp.Distribution so that routines from tfp can be applied directly to QHBMs.
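
As a rough illustration of the second goal (the class below is a toy, not the design doc's API), a distribution backing the EBM could subclass tfp.distributions.Distribution so that generic TFP routines apply to it directly:

    import tensorflow as tf
    import tensorflow_probability as tfp

    class BernoulliEBM(tfp.distributions.Distribution):
        """Toy product-of-Bernoullis energy-based model."""

        def __init__(self, logits, name="BernoulliEBM"):
            self._logits = tf.convert_to_tensor(logits, dtype=tf.float32)
            super().__init__(
                dtype=tf.int32,
                reparameterization_type=tfp.distributions.NOT_REPARAMETERIZED,
                validate_args=False,
                allow_nan_stats=True,
                name=name)

        def _event_shape(self):
            return tf.TensorShape([self._logits.shape[0]])

        def _batch_shape(self):
            return tf.TensorShape([])

        def _log_prob(self, bitstrings):
            b = tf.cast(bitstrings, tf.float32)
            return tf.reduce_sum(
                b * tf.math.log_sigmoid(self._logits)
                + (1.0 - b) * tf.math.log_sigmoid(-self._logits), axis=-1)

        def _sample_n(self, n, seed=None):
            del seed  # seeding omitted in this sketch
            probs = tf.sigmoid(self._logits)
            uniform = tf.random.uniform([n, tf.shape(self._logits)[0]])
            return tf.cast(uniform < probs, tf.int32)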

Abstract the quantum circuit

Related to #33. The idea is that it makes more sense to encapsulate the specifics of quantum circuits and simulations into their own class, rather than forcing functions like the VQT and QMHL losses to deal with circuit parameters explicitly.
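
For example (a hedged sketch with made-up names, not a proposed interface), such a class could own the circuit, its symbols, and its parameter values, so loss functions never touch them directly:

    import cirq
    import sympy
    import tensorflow as tf

    class ParameterizedCircuit:
        """Bundles a parameterized circuit with its symbols and values."""

        def __init__(self, pqc, symbols, initial_values):
            self.pqc = pqc
            self.symbols = list(symbols)
            self.values = tf.Variable(initial_values, dtype=tf.float32)

        def resolved_circuit(self):
            # Bind the current parameter values into the circuit.
            resolver = dict(zip(self.symbols, self.values.numpy().tolist()))
            return cirq.resolve_parameters(self.pqc, resolver)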

Profiling

In the past we have had performance bottlenecks coming from retracing and slow autographed loops. I have used the TensorBoard profiler to improve performance before; it would be good to build a standard workflow for profiling the library with it.
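
A minimal sketch of such a workflow with the TensorBoard profiler (directory name and step count are arbitrary); the captured trace can then be inspected with `tensorboard --logdir=logs`:

    import tensorflow as tf

    tf.profiler.experimental.start("logs/profile")
    for step in range(10):
        with tf.profiler.experimental.Trace("train", step_num=step, _r=1):
            pass  # call the training step being profiled here
    tf.profiler.experimental.stop()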

Custom gradient and tf function

Removing the tf.function decorator from the VQT loss function causes it to break; similarly, adding the decorator to the QMHL loss causes the error:

TypeError: @tf.custom_gradient grad_fn must accept keyword argument 'variables', since function uses variables

This error does not make sense to me, since all variables used seem to be inputs to the forward pass function (this may be a good first assumption to verify: are the input variables not what we think they are?). Adding the variables=None keyword to the gradient function changes the error to

ValueError: None values not supported.

I would be more comfortable with our solutions if adding or removing tf.function did not change the correctness.
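
For reference, a minimal standalone sketch of what the first error message asks for: when the decorated function reads a tf.Variable, its grad_fn must accept the variables keyword and return gradients for those variables as a second output. (This does not yet explain why our losses hit the error.)

    import tensorflow as tf

    v = tf.Variable(2.0)

    @tf.custom_gradient
    def scale(x):
        y = v * x  # reads a variable, so grad_fn needs the `variables` kwarg

        def grad_fn(dy, variables=None):
            grad_x = dy * v                                   # gradient w.r.t. the input x
            grad_vars = [dy * x for _ in (variables or [])]   # gradient w.r.t. v
            return grad_x, grad_vars

        return y, grad_fn

    with tf.GradientTape() as tape:
        out = scale(tf.constant(3.0))
    print(tape.gradient(out, v))  # 3.0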

Tighten test tolerances

Some tests have tolerances that are fairly loose - for example #84 has a 3% relative tolerance. The tolerance can be tightened by increasing the number of samples used in the calculation, as expected. But currently 1e6 samples are required to reach this 3% tolerance, and 1e7 to reach 1%, which seems excessive. Resolving this issue would involve finding any sources of numerical imprecision we can better control, to decrease the number of samples required to enforce a 1% relative tolerance everywhere.

Replace single qubit tutorial training with larger example

Remove the single-qubit training examples, since they (intentionally) did not use the correct ansatz. Instead, we can transition at that point in the tutorial to applying the tools learned to a realistic problem like an Ising spin chain.

Enable differentiation through hypernetworks

The Keras example on hypernetworks overwrites a tf.Variable with a tensor output of a hypernetwork, then differentiates through it. Code copied from that example:

    with tf.GradientTape() as tape:
        # Predict weights for the outer model.
        weights_pred = hypernetwork(x)

        # model stuff happens
        ...

        # Set the weight predictions as the weight variables on the outer model.
        main_network.layers[0].kernel = w0
        main_network.layers[0].bias = b0
        main_network.layers[1].kernel = w1
        main_network.layers[1].bias = b1

        # Inference on the outer model.
        preds = main_network(x)
        loss = loss_fn(y, preds)
    grads = tape.gradient(loss, hypernetwork.trainable_weights)

We similarly need a way to overwrite the thetas and phis of a QHBM and still differentiate with respect to the hypernetwork parameters.
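
A standalone sketch of the same mechanism (illustrative names, using a plain Dense layer): the attribute that normally holds a tf.Variable is replaced by a tensor computed from upstream parameters, and the gradient still flows back to those parameters.

    import tensorflow as tf

    upstream = tf.Variable([[0.5, -0.3]])  # stands in for hypernetwork weights
    layer = tf.keras.layers.Dense(2)
    layer.build((None, 2))

    x = tf.constant([[1.0, 2.0]])
    with tf.GradientTape() as tape:
        # A tensor (not a Variable) computed from the upstream parameters.
        predicted_kernel = tf.reshape(tf.tile(upstream, [2, 1]), (2, 2))
        layer.kernel = predicted_kernel  # overwrite, keeping the gradient path
        y = tf.reduce_sum(layer(x))
    print(tape.gradient(y, upstream))  # nonzero: gradient reaches upstream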

See related issue #86

Abstract the energy function

As discussed with @sahilpatelsp, there have been some difficulties with the separation between the energy function and its parameters. We should combine these into a single class.

Improve bitstring initialization efficiency

When sampling state circuits, it would be more efficient to replace symbols in a tiling of bitstring gates before concatenating with the model circuit than to scan for the bit symbols in an already concatenated circuit.
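
A small sketch of the tiling idea in cirq (symbol names are illustrative): build the bit-injection circuit once with one symbol per qubit, then resolve those symbols directly for each sampled bitstring, instead of searching a concatenated circuit for them.

    import cirq
    import sympy

    qubits = cirq.GridQubit.rect(1, 3)
    bit_symbols = [sympy.Symbol(f"bit_{i}") for i in range(len(qubits))]
    bit_injector = cirq.Circuit(cirq.X(q) ** s for q, s in zip(qubits, bit_symbols))

    bitstring = [1, 0, 1]
    resolver = {s: b for s, b in zip(bit_symbols, bitstring)}
    resolved = cirq.resolve_parameters(bit_injector, resolver)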

Update docstrings

Some docstrings are incomplete, missing arguments or returns; some are absent entirely; and none are indented with 4 spaces to align with the new formatting standard.
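
For illustration, assuming the new standard means Google-style docstring sections with entries indented by 4 spaces (the function and argument names below are made up):

    def example_loss(model, target, num_samples):
        """Computes an example loss.

        Args:
            model: The model whose parameters are being trained.
            target: The quantity the model is trained to match.
            num_samples: Number of samples used in the estimate.

        Returns:
            Scalar estimate of the loss.
        """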

ebm features

We might want to consider adding the following features into the ebm module at some point:

  • Transformer EnergyFunction
  • KOBE with connectivity graph
  • Gibbs EnergySampler

multiple independent mcmc chains

The standard tfp implementation of MCMC methods offers the option of running multiple independent Markov chains through additional batch dimensions. However, the MCMC methods currently implemented in the library do not allow for additional batch dimensions, because the energy function only takes in a single bitstring and returns a single scalar value.
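
For contrast, a minimal sketch of what tfp expects (the toy target below is not the library's energy function): a target_log_prob_fn that maps a batch of states to one value per chain, so sample_chain can run independent chains along the leading dimension.

    import tensorflow as tf
    import tensorflow_probability as tfp

    def batched_neg_energy(states):
        # states: [num_chains, num_dims]; returns one log-probability per chain.
        return -tf.reduce_sum(tf.square(states), axis=-1)

    kernel = tfp.mcmc.RandomWalkMetropolis(target_log_prob_fn=batched_neg_energy)
    samples = tfp.mcmc.sample_chain(
        num_results=100,
        current_state=tf.zeros([8, 4]),  # 8 independent chains, 4 dimensions each
        kernel=kernel,
        trace_fn=None)
    # samples has shape [100, 8, 4]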

Add examples

After #5, create Colab examples for both VQT and QMHL on the TFIM, the examples from the paper.

Localize samplers to QHBMs

Rather than leaving the backend up to the VQT and QMHL loss functions, we should have the QHBM initialized with the desired backend.

handling keyword arguments

There are a lot of different keyword arguments on the classes we inherit from and the functions we call, and right now we only pass through the keyword arguments that seem necessary for our immediate purposes. However, I am not sure whether we want to handle the full generality of all possible keyword arguments.
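
If we do want the generality, the usual pattern is to accept **kwargs and forward them to the parent class, as in this generic sketch (the class is illustrative):

    import tensorflow as tf

    class EnergyLayer(tf.keras.layers.Layer):
        def __init__(self, num_bits, **kwargs):
            # Forward name, dtype, trainable, etc. to the parent class.
            super().__init__(**kwargs)
            self.num_bits = num_bits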

Update imports

Right now, in a Colab, after importing qhbmlib, I cannot access qhbmlib.hamiltonian. We need to update the __init__.py file so that importing qhbmlib makes qhbmlib.hamiltonian available.
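
A sketch of the kind of line the package __init__.py needs (the real file may re-export more modules):

    # qhbmlib/__init__.py
    from qhbmlib import hamiltonian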

gradient of energy function wrt input bitstring

The Gibbs with Gradients proposal function involves computing the gradient of the energy function with respect to the input bitstring. If we wish to use automatic differentiation to compute this gradient, then the input bitstring must have a floating point type and all operations should involve tensors with floating point type. However, the current library implementation stores bitstrings as type bool, and some operations for computing the energy function explicitly cast tensors to type int.
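
A toy sketch of the requirement (the quadratic energy below is illustrative, not the library's energy function): once the bitstring is a float tensor, tf.GradientTape can provide the per-bit gradient that Gibbs with Gradients proposals use.

    import tensorflow as tf

    def energy(bits):
        # bits: float tensor of shape [num_bits]; toy Ising-like energy.
        return tf.reduce_sum(bits[:-1] * bits[1:]) - tf.reduce_sum(bits)

    bits = tf.constant([1.0, 0.0, 1.0])  # float, not bool, so autodiff applies
    with tf.GradientTape() as tape:
        tape.watch(bits)
        e = energy(bits)
    grad = tape.gradient(e, bits)  # d(energy)/d(bit_i) for each bit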

reorganize into subpackages

It might be worth considering reorganizing the library into collective subpackages rather than modules consisting of single files, which is the current approach. One such organization scheme could be the following:

qhbmlib/
    __init__.py

    ebm/
        __init__.py
        ebm.py
        energy_functions/
            kobe.py
            mlp.py
            ...
        energy_samplers/
            energy_sampler.py
            mcmc/
                mcmc.py
                energy_kernel.py
                mh.py
                gwg.py
                ...

    qnn/
        __init__.py
        qnn.py
        pqc/
            hea.py
            qcnn.py
            ...

    qhbm/
        __init__.py
        qhbm.py

    losses/
        __init__.py
        vqt.py
        qmhl.py
        ...

    metrics/
        __init__.py
        fidelity.py
        ...

    datasets/
        __init__.py
        hamiltonian.py
        ...

    utils/
        __init__.py
        utils.py

Internal `setup.py` generation

@jaeyoo mentioned in discussion that internal generation based on the pyproject.toml is possible. This issue is to add this capability, or to determine that it is not feasible or not desirable.

Refresh architectures module

Since the update to the QNN input signature in #83, there is no longer a need for architecture functions to return their list of used symbols. All the functions should be updated accordingly, and deprecated functions removed.

Automate InferenceLayer infer calls

We gain the ability to memoize distribution construction in #111. However, now we need to manually call infer whenever we update the parameters of the underlying energy model. Instead, could we set up a structure where the InferenceLayer knows when the model variables have been changed, and automatically calls infer when another method is called?
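
One possible shape for that structure, sketched with hypothetical names (build_distribution stands in for whatever infer actually constructs): the layer snapshots the energy model's variables and re-runs inference only when they have changed.

    import tensorflow as tf

    class AutoInferenceLayer(tf.keras.layers.Layer):
        def __init__(self, energy_model):
            super().__init__()
            self.energy_model = energy_model
            self._last_seen = None
            self._distribution = None

        def _variables_changed(self):
            current = [v.numpy().copy() for v in self.energy_model.trainable_variables]
            changed = (self._last_seen is None or
                       any((a != b).any() for a, b in zip(current, self._last_seen)))
            self._last_seen = current
            return changed

        def infer(self):
            # Placeholder for the expensive distribution construction.
            self._distribution = self.energy_model.build_distribution()

        def sample(self, num_samples):
            if self._distribution is None or self._variables_changed():
                self.infer()
            return self._distribution.sample(num_samples)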

Boost test coverage

We want to be above 90% coverage on every module before launch (with meaningful tests!). Here is a list of modules and their current mainline coverage:

Name                       Stmts   Miss Branch BrPart  Cover
----------------------------------------------------------------------
qhbmlib/architectures.py     258    150     92      0    40%
qhbmlib/ebm.py               358    265     38      1    24%
qhbmlib/hamiltonian.py        68     18     42      0    73%
qhbmlib/qhbm_base.py         149     74     12      1    51% 
qhbmlib/qmhl.py               38     28      0      0    26%
qhbmlib/qnn.py                82      4     34      0    97%
qhbmlib/util.py              124     86      0      0    31%
qhbmlib/vqt.py                83     66      0      0    20%
----------------------------------------------------------------------
TOTAL                       1161    691    218      2    42%
  • architectures.py
  • ebm.py
  • hamiltonian.py
  • qhbm_base.py
  • qmhl.py
  • qnn.py
  • util.py
  • vqt.py

Jacobians and expectations

In PR #70, I added a test that calls the jacobian method of a tf.GradientTape on the result of a TFQ calculation. The test passes if I do not use the @tf.function decorator. However, when I add the decorator, I get the following error at the end of a long chain:

LookupError: No gradient defined for operation 'TfqAdjointGradient' (op type: TfqAdjointGradient)

This is strange, because in the test function, there is only one gradient tape, so no gradient of 'TfqAdjointGradient' should be attempted.

I have been trying to recreate the error with a simplified scenario in a Colab notebook, but so far I cannot replicate it.
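
For reference, a simplified scenario of this kind might look roughly like the sketch below (a single-qubit circuit with tfq.layers.Expectation); the open question is why wrapping the taped section in tf.function changes the behavior.

    import cirq
    import sympy
    import tensorflow as tf
    import tensorflow_quantum as tfq

    qubit = cirq.GridQubit(0, 0)
    theta = sympy.Symbol("theta")
    circuit = tfq.convert_to_tensor([cirq.Circuit(cirq.X(qubit) ** theta)])
    op = tfq.convert_to_tensor([[cirq.Z(qubit)]])

    values = tf.Variable([[0.5]])
    with tf.GradientTape() as tape:
        expectations = tfq.layers.Expectation()(
            circuit, symbol_names=["theta"], symbol_values=values, operators=op)
    jac = tape.jacobian(expectations, values)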
