
tensornetwork's Introduction


A tensor network wrapper for TensorFlow, JAX, PyTorch, and Numpy.

For an overview of tensor networks please see the following:

More information can be found in our TensorNetwork papers:

Installation

pip3 install tensornetwork

Documentation

For details about the TensorNetwork API, see the reference documentation.

Tutorials

Basic API tutorial

Tensor Networks inside Neural Networks using Keras

Basic Example

Here, we build a simple 2-node contraction.

import numpy as np
import tensornetwork as tn

# Create the nodes
a = tn.Node(np.ones((10,))) 
b = tn.Node(np.ones((10,)))
edge = a[0] ^ b[0] # Equal to tn.connect(a[0], b[0])
final_node = tn.contract(edge)
print(final_node.tensor) # Should print 10.0

Optimized Contractions

Usually, it is more computationally efficient to flatten parallel edges before contracting them in order to avoid trace edges. We have contract_between and contract_parallel that do this automatically for your convenience.

# Contract all of the edges between a and b
# and create a new node `c`.
c = tn.contract_between(a, b)
# This is the same as above, but much shorter.
c = a @ b

# Contract all of the edges that are parallel to `edge`
# (parallel means connected to the same nodes).
c = tn.contract_parallel(edge)

Split Node

You can split a node by doing a singular value decomposition.

# This will return two nodes and a tensor of the truncation error.
# The two nodes are the unitary matrices multiplied by the square root of the
# singular values.
# The `left_edges` are the edges that will end up on the `u_s` node, and `right_edges`
# will be on the `vh_s` node.
u_s, vh_s, trun_error = tn.split_node(node, left_edges, right_edges)
# If you want the singular values in their own node, you can use `split_node_full_svd`.
u, s, vh, trun_error = tn.split_node_full_svd(node, left_edges, right_edges)

Node and Edge names

You can optionally name your nodes/edges. This can be useful for debugging, as all error messages will print the name of the broken edge/node.

node = tn.Node(np.eye(2), name="Identity Matrix")
print("Name of node: {}".format(node.name))
edge = tn.connect(node[0], node[1], name="Trace Edge")
print("Name of the edge: {}".format(edge.name))
# Adding a name to a contraction will add the name to the newly created node.
final_result = tn.contract(edge, name="Trace Of Identity")
print("Name of new node after contraction: {}".format(final_result.name))

Named axes

To make remembering what an axis does easier, you can optionally name a node's axes.

a = tn.Node(np.zeros((2, 2)), axis_names=["alpha", "beta"])
edge = a["beta"] ^ a["alpha"]

Edge reordering

To ensure that your result's axes are in the correct order, you can reorder a node's edges at any time during the computation.

a = tn.Node(np.zeros((1, 2, 3)))
e1 = a[0]
e2 = a[1]
e3 = a[2]
a.reorder_edges([e3, e1, e2])
# If you already know the axis values, you can equivalently do
# a.reorder_axes([2, 0, 1])
print(a.tensor.shape) # Should print (3, 1, 2)

NCON interface

For a more compact specification of a tensor network and its contraction, there is ncon(). For example:

from tensornetwork import ncon
a = np.ones((2, 2))
b = np.ones((2, 2))
c = ncon([a, b], [(-1, 1), (1, -2)])
print(c)

Different backend support

Currently, we support JAX, TensorFlow, PyTorch and NumPy as TensorNetwork backends. We also support tensors with Abelian symmetries via a symmetric backend, see the reference documentation for more details.

To change the default global backend, you can do:

tn.set_default_backend("jax") # tensorflow, pytorch, numpy, symmetric

Or, if you only want to change the backend for a single Node, you can do:

tn.Node(tensor, backend="jax")

If you want to run your contractions on a GPU, we highly recommend using JAX, as it has the closest API to NumPy.

Disclaimer

This library is in alpha and will be going through a lot of breaking changes. While releases will be stable enough for research, we do not recommend using this in any production environment yet.

TensorNetwork is not an official Google product. Copyright 2019 The TensorNetwork Developers.

Citation

If you are using TensorNetwork for your research please cite this work using the following bibtex entry:

@misc{roberts2019tensornetwork,
      title={TensorNetwork: A Library for Physics and Machine Learning}, 
      author={Chase Roberts and Ashley Milsted and Martin Ganahl and Adam Zalcman and Bruce Fontaine and Yijian Zou and Jack Hidary and Guifre Vidal and Stefan Leichenauer},
      year={2019},
      eprint={1905.01330},
      archivePrefix={arXiv},
      primaryClass={physics.comp-ph}
}


tensornetwork's Issues

Relative imports under tensornetwork/

Could we make it possible to use our code without installing it using setup.py? If we stick to relative imports within the tensornetwork/ folder, such as import decompositions in place of from tensornetwork import decompositions, it would be possible to e.g. run the tests directly from source.

Autograph issues

With tf.function(autograph=True), the compile time for code that uses contract_between skyrockets. This is bad because autograph=True is the default for tensorflow>=1.14.0.

We need to figure out where the bug lies, either on our end or on the tensorflow end.
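As a stopgap while the root cause is unknown, one option is to disable autograph for the traced function. A minimal sketch follows; the function body is only a placeholder, not library code.

import tensorflow as tf

# Hedged workaround sketch: opt out of autograph when tracing, since
# autograph=True (the tensorflow>=1.14.0 default) is what triggers the slow compile.
@tf.function(autograph=False)
def contract_fn(a, b):
  # Placeholder for code that would call contract_between internally.
  return tf.tensordot(a, b, axes=[[0], [0]])

print(contract_fn(tf.ones((3,)), tf.ones((3,))))  # Should evaluate to 3.0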

Tests are very slow

The tests under experiments/MERA are very slow and cause the travis build to take >10 minutes. The tests are also very slow when running on a laptop, which inhibits external contributors. We need to find a way to make these tests faster.

__init__() takes 2 positional arguments but 4 were given

My environment: Ubuntu 18.04, tensorflow=='2.0.0-beta0'
Code from arXiv:1905.01330:

net = tensornetwork.TensorNetwork()
a = net.add_node(tf.ones(2))
b = net.add_node(tf.ones(2))
edge = net.connect(a[0], b[0])
c = net.contract(edge)
print(c.get_tensor().numpy()) # Should print 2.0

Error message:

TypeError                                 Traceback (most recent call last)
<ipython-input-2-cb4097656450> in <module>
      3 b = net.add_node(tf.ones(2))
      4 edge = net.connect(a[0], b[0])
----> 5 c = net.contract(edge)
      6 print(c.get_tensor().numpy()) # Should print 2.0
~/miniconda3/lib/python3.6/site-packages/tensornetwork/network.py in contract(self, edge, name)
    381       return self._contract_trace(edge, name)
    382     new_tensor = self.backend.tensordot(edge.node1.tensor, edge.node2.tensor,
--> 383                                         [[edge.axis1], [edge.axis2]])
    384     new_node = self.add_node(new_tensor, name)
    385     self._remove_edges(set([edge]), edge.node1, edge.node2, new_node)
~/miniconda3/lib/python3.6/site-packages/tensornetwork/backends/tensorflow/tensorflow_backend.py in tensordot(self, a, b, axes)
     41 
     42   def tensordot(self, a: Tensor, b: Tensor, axes: Sequence[Sequence[int]]):
---> 43     return self.tensordot2.tensordot(a, b, axes)
     44 
     45   def reshape(self, tensor: Tensor, shape: Tensor):
~/miniconda3/lib/python3.6/site-packages/tensornetwork/backends/tensorflow/tensordot2.py in tensordot(a, b, axes, name)
    227     return axes[0], axes[1]
    228 
--> 229   with tf.name_scope(name, "Tensordot", [a, b, axes]) as _name:
    230     a = tf.convert_to_tensor(a, name="a")
    231     b = tf.convert_to_tensor(b, name="b")
TypeError: __init__() takes 2 positional arguments but 4 were given

TF backend should avoid unneeded shape-related ops

Methods like flatten_edges currently manipulate tensor shapes via tf.shape() and other TF ops acting on the resulting tensors.

Each TF tensor or op increases the run-time overhead in eager mode and increases the graph build and optimization time in graph mode. Fortunately, these shape-related ops can be avoided when the shapes of the tensors involved are fully defined, which is always true in eager mode and is often true when building a graph.

For reference: Within TF, ops like tensordot and einsum go to some lengths to do these optimizations.
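A rough illustration of the idea (not the library's actual code): branch on whether the static shape is fully defined, and only fall back to the tf.shape() op when it is not.

import tensorflow as tf

def get_shape(tensor):
  # Hedged sketch: use the static, Python-level shape when it is fully
  # defined (always true in eager mode), which creates no extra TF ops;
  # fall back to the dynamic tf.shape() op only for unknown dimensions.
  if tensor.shape.is_fully_defined():
    return tensor.shape.as_list()
  return tf.shape(tensor)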

import tensornetwork error

Installed Docker and tensornetwork as instructed on the install page. When I go to a project in Python, I am unable to get the tensornetwork framework recognized for use. Has anyone else had similar issues?

Add "signature" to nodes.

Give nodes a unique "signature". This is required to easily map nodes in the TensorNetwork to nodes in the ContractionTree.

For the signature, we can likely reuse the node increment number. We will also have to add a dictionary where we can look up a node from its signature quickly at runtime.

von Neumann entropy

Consider a tensor network that defines a state on some dangling legs. I would like to be able to specify a subset of those legs and output the von Neumann entropy S = - tr ( rho log rho), where rho is the normalized density matrix of those legs.
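For reference, a plain-NumPy sketch of the requested quantity; this is not an existing TensorNetwork call, and the hard part is producing rho from the network in the first place.

import numpy as np

def von_neumann_entropy(rho):
  # rho is assumed to be a normalized density matrix (Hermitian, trace 1).
  p = np.linalg.eigvalsh(rho)
  p = p[p > 1e-12]  # drop numerically zero eigenvalues
  return -np.sum(p * np.log(p))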

TF's graph optimizer could be smarter about transpositions

TF is not very smart about transpositions. For example, the graph optimizer could do the following:

  1. Merge adjacent transpositions (each transpose op costs time and creates a new tensor)
  2. Pull transpositions through reshapes where possible (transposing with fewer dimensions should be faster, but is currently often much slower; see #32)
  3. Merge transpose ops with matmul ops where possible (turn a matrix transpose into a transpose argument in matmul)

Currently, none of these appear to happen.

TF issue: tensorflow/tensorflow#28933
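A small NumPy check of the composition rule behind item 1; the arrays and permutations here are only illustrative.

import numpy as np

# Two back-to-back transpositions are equivalent to a single transposition
# with the composed permutation: merged[k] = p1[p2[k]].
x = np.random.rand(2, 3, 4, 5)
p1, p2 = [1, 0, 3, 2], [2, 0, 3, 1]
merged = [p1[i] for i in p2]
assert np.array_equal(np.transpose(np.transpose(x, p1), p2),
                      np.transpose(x, merged))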

Remove the Assert{In,True,Not,...} calls in tensornetwork_test

We want to inline all of the Assert* calls in tensornetwork_test.

These are left over from when we transitioned from googletest to pytest. They cause the error messages to be not as nice as they could be. It's mostly just tedious work, though I'm sure someone could write a short vim command to get it done.

Tensor types and backends

If we would like to have the same main network tests (like tensornetwork_test.py) for different backends, all based on np.testing.assert_allclose, then it might be useful to have a backend method for get_tensor. This should return a numpy array for all backends, something like the inverse of convert_to_tensor that we already have. It might be tricky to do for TF graph mode where it is a bit harder to go from tensor to numpy, so we will probably need different tests for this case (as we have now). A get_shape method would also be handy for the same reason, as @Thenerdstation already mentioned.
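A rough sketch of what such a helper might look like; the backend names and the eager-only assumption for TensorFlow are mine, not the library's.

import numpy as np

def get_tensor(backend_name, tensor):
  # Hypothetical test helper: convert a backend tensor to a numpy array,
  # roughly the inverse of convert_to_tensor. TF graph mode would still
  # need separate handling, as noted above.
  if backend_name in ("numpy", "jax"):
    return np.asarray(tensor)
  if backend_name in ("tensorflow", "pytorch"):
    return tensor.numpy()  # eager TF tensors and torch tensors expose .numpy()
  raise ValueError("unknown backend: {}".format(backend_name))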

split_node drops more singular values than it should when using max_truncation_err

This is in the tensorflow backend, but I assume it is also true for the other backends.

diagonal_array = np.array([[2.0, 0.0, 0.0],
                           [0.0, 2.5, 0.0],
                           [0.0, 0.0, 1.5]]) 
net = tensornetwork.TensorNetwork()
a = net.add_node(diagonal_array)
u, vh, truncation_error = net.split_node(
    a, left_edges=[a[0]], right_edges=[a[1]], max_truncation_err=2.7)
print(truncation_error.numpy())

This outputs

[2.  1.5]

When really it should have only dropped the 1.5 singular value.

More flexible add_node()

Currently, add_node() requires one to specify a tensor. Certain types of nodes are more efficient to manipulate based on their abstract properties than based on their component representation (e.g. copy tensors, Levi-Civita tensors).

We should generalize the API to allow one to add nodes into the network without explicitly computing those tensors. For example, we could re-define add_node() to take a Node instead of a Tensor and leave Node construction to the caller. This would allow us to do things such as:

tensor = np.array(...)
net.add_node(tn.Node(tensor))

net.add_node(tn.CopyNode(rank=2, dimension=3))

net.add_node(tn.LeviCivitaNode(rank=4))

Of course in practice most nodes are created from multidimensional arrays, so for convenience we could also offer a shortcut e.g. add_tensor() such that the following two code snippets are equivalent:

shortcut:

net.add_tensor(tensor)

long form:

net.add_node(tn.Node(tensor))

Copy a TensorNetwork without copying the Tensors

I would like a convenient way to copy a TensorNetwork without copying the Tensors associated with nodes. I don't think I can just use deepcopy for that, since it would copy the Tensors too?

This could be useful for algorithms involving finite tensor networks. We could set a network once for e.g. a matrix product state, then create copies before contracting subnetworks for various computations (e.g. compute the norm or energy).

Alternative contraction path algorithms

The opt_einsum repository has been working on paths for tensor contractions for a few years now and has come up with a variety of algorithms tuned for several use cases. A few algorithms to check out:

  • Deterministic algorithms can be found here
  • Some exploration of non-deterministic algorithms here
  • Greedy algorithms with custom heuristics here
  • Shared intermediate algorithms here

We originally powered the optimization algorithms in the NumPy einsum function, but have expanded to a number of other projects such as Pyro. Hopefully you may find these algorithms useful as well!

Add PyTorch backend.

Once #52 is submitted, it would be nice to add a pytorch backend.

TF eager and pytorch are very similar, so someone could use the TensorFlowBackend as an example.

Replacement for tf.einsum

Our new contraction code that exploits copy tensors builds an einsum expression and then passes it to a backend for execution. In some cases, e.g. when expressing the SWAP quantum gate as three CNOTs, the expression will contain a repeated index, e.g. 'aa,a->'. Unlike numpy and JAX, Tensorflow does not currently support this, see tf.einsum.

We need a workaround for this.

One approach would be to use something other than einsum. Another would be to provide a wrapper in the backend class that implements the more flexible einsum we need in terms of the weaker one offered by Tensorflow.
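One concrete shape the wrapper could take is to pre-process repeated indices within a single operand by extracting the diagonal before calling tf.einsum. A hedged sketch for the 'aa,a->' case only; a general wrapper would have to rewrite arbitrary expressions.

import tensorflow as tf

a = tf.ones((3, 3))
b = tf.ones((3,))
# 'aa,a->' means sum_i a[i, i] * b[i]; TF rejects the repeated index within
# one operand, so take the diagonal first and contract the two vectors.
result = tf.einsum('a,a->', tf.linalg.diag_part(a), b)
print(result)  # Should evaluate to 3.0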

Could you please give an example of Image Classification Training using TN?

" The construction of these special MPS tensor networks amounts to the training of the model, the details of which can be found in the work of Stoudenmire and Schwab [35], as well as in our TensorNetwork implementation [47]."

I have read the paper and found that tensor networks can be used for image classification, but I do not know the details. Also, "TensorNetwork on TensorFlow: An Application to Machine Learning" is not available now.

So could you please give an example of image classification training using TN (TT format)?

Add run_on_device method to TensorflowBackend

Currently, only TensorFlow supports remote execution, but JAX has it planned (numpy clearly never will, so just throw an error or no-op).

I think we would use it like this:

with self.backend.run_on_device("gpu1"):
  a = self.contract(...)

with self.backend.run_on_device("gpu2"):
  b = self.contract(...)
  c = self.contract_between(a, b)

So a and b will be calculated in parallel. Afterwards, gpu1 sends a to gpu2 to calculate c.

This is the current way to define operation placement in tensorflow. I am open to other API designs though.
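For reference, the existing TensorFlow mechanism this would wrap is tf.device; a minimal sketch (the device strings are assumptions about the machine, not something the library provides):

import tensorflow as tf

with tf.device("/GPU:0"):
  a = tf.matmul(tf.ones((2, 2)), tf.ones((2, 2)))

with tf.device("/GPU:1"):
  b = tf.matmul(tf.ones((2, 2)), tf.ones((2, 2)))
  c = a @ b  # `a` is copied to GPU 1 for this op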

Create ContractionNetwork

Create a ContractionNetwork class.

This will inherit from the TensorNetwork class but use a hard-coded ShellBackend. This will also build the ContractionTree from net.contract_* calls.

Tensor network with symmetries

Some tensor networks can have global symmetries like conserved U(1) or SU(2) charge.
The symmetry imposes a sparse block structure on the individual tensors, which can be exploited to speed up contractions and decompositions, and reduce memory requirements.

One way of implementing symmetries is to wrap the blocks together with some additional information into a new class, and provide an API for basic operations. A (non-exhaustive) list of operations that would be needed:

  • contraction of two block-sparse tensors
  • reshaping (i.e. merging and splitting indices) of a block-sparse tensor
  • matrix decompositions (e.g. SVD, QR, eigen-decompositions, ...)
  • conjugation, addition, ... (all vector space operations)

We could add a new class SparseNode and a method net.add_sparse_node(sparse_tensor), and delegate contraction details to the node classes, such that node1.tensordot(node2, edges) works for both Node and SparseNode.

I assume that this would result in changes in a lot of places in the API ...

Implement a Greedy contractor

Implement a contractor that will deterministically do the lowest cost contract_between call possible.

Ideally, it would also keep track of the lowest-cost pairs in a heap, so that the runtime is O(n log n) instead of O(n^2).
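A hedged sketch of the heap bookkeeping, written against a generic pair_cost callback rather than the real contract_between cost model (all names here are hypothetical):

import heapq
import itertools

def greedy_order(nodes, pair_cost):
  # Greedy contraction order using a heap with lazy invalidation: entries
  # whose nodes were already consumed are simply skipped when popped.
  alive = set(nodes)
  tie = itertools.count()  # tie-breaker so node objects are never compared
  heap = [(pair_cost(a, b), next(tie), a, b)
          for a, b in itertools.combinations(nodes, 2)]
  heapq.heapify(heap)
  order = []
  while len(alive) > 1:
    _, _, a, b = heapq.heappop(heap)
    if a not in alive or b not in alive:
      continue  # stale pair
    alive -= {a, b}
    new = (a, b)  # stands in for the node returned by contract_between(a, b)
    order.append(new)
    for other in alive:
      heapq.heappush(heap, (pair_cost(new, other), next(tie), new, other))
    alive.add(new)
  return order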

Rename tensornetwork.py?

Currently, we have a module tensornetwork.tensornetwork, which I find unnecessarily ambiguous. Both tensornetwork and tensornetwork.tensornetwork provide TensorNetwork, but only one of them provides ncon, etc.

Can we rename tensornetwork.py to core.py or somesuch?

Derivatives and environments

If we define a tensor network for e.g. a scalar value (no dangling edges) and we want to compute the derivative of that number with respect to a tensor T in the network, one could use autodiff (depending on the backend). This is also known as computing the "environment" of T.
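For backends with autodiff this already works; a hedged JAX sketch on a tiny hand-written network (the network and all names are made up for illustration):

import jax
import jax.numpy as jnp

def network_value(t, a, b):
  # A small scalar-valued "network": sum_ijk t[i,j] a[j,k] b[k,i].
  return jnp.einsum('ij,jk,ki->', t, a, b)

t = jnp.ones((3, 3))
a = jnp.ones((3, 3))
b = jnp.ones((3, 3))
env_t = jax.grad(network_value, argnums=0)(t, a, b)
# Because the value is linear in t, env_t equals the contraction of the same
# network with t removed: env_t[i, j] = sum_k a[j, k] b[k, i].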

However, the first derivative is also given by the contraction of the same network with the T tensor removed. For doing this, it would be nice to have a remove_node() method that deletes a node from the network. Any dangling edges attached to that node are also removed. Any connected edges become dangling edges (modified in place, so that the edge objects persist).

This would be useful for a number of algorithms: One can define the network for, say, the energy of a quantum state once, then compute all required environments/derivatives using remove_node(). These are then used to minimize the energy.

Going further, if multiple environments of the same network are desired, it is possible to take the optimal contraction order for one environment and derive optimal contraction orders for all the others: https://arxiv.org/abs/1310.8023
Intermediate results can also often be reused for multiple environments.

One might imagine having a method environments([n1, n2, n3]) (where n1,n2,n3 are nodes) that efficiently computes environments of multiple tensors this way.

Thoughts?

Should ncon allow "0" as a label?

I skimmed the NCON paper the other day and realized why other implementations tend to disallow 0 as a label for contraction edges: It is because the reference implementation of NCON treats 0 specially - it is used as part of a separate contraction order argument to specify when to carry out outer products (sometimes it is more efficient to do outer products as intermediate steps). This raised two issues for me:

  1. Should we stop accepting 0 as a valid contraction edge label to ensure our ncon calls are compatible with the reference implementation? (Probably yes...)
  2. Should we implement the (rather obscure) interpretation of 0 in the contraction order? (Probably no.)

Elaborating on 2: It's a lot of work for possibly very little gain. Instead just support using a specific smart contractor, or a saved contraction order, in ncon(): e.g. ncon(..., contractor='optimal').

Thoughts?

Should 3sat be in examples rather than experiments?

We made a distinction in order to separate complicated code from relatively simple examples.

For example: The wavefunctions code is quite short and also directly uses the TN API rather than the ncon interface. The MPS, MERA, and Tree code is quite long and mostly uses ncon.

I feel like 3sat is short enough to go in examples. It also does not use ncon, making it a good example of how to use the API directly.

Improve Documentation

Currently, our readthedocs documentation is very lackluster. The formatting for some of the function docs looks pretty bad, and the layout of the sidebar needs improvement. We also need example code and some basic tutorials to help people get started (these can possibly be taken from the existing README).

Visualization of network

Is there any way to visualize the network?

This library is based on TensorFlow, so it may be suitable to use TensorBoard. However, I think we would need to implement visualization code for TensorBoard, and it will be hard.
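In the meantime, a quick way to eyeball the graph structure is to mirror it in a general-purpose graph library; a hedged sketch using networkx (not part of the TensorNetwork API, and it assumes matplotlib is installed for drawing):

import networkx as nx

g = nx.Graph()
g.add_edge("a", "b")  # one graph edge per connected tensor-network edge
g.add_node("c")       # nodes with only dangling edges still appear
nx.draw(g, with_labels=True)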

Reduced Density Matrices

Consider a tensor network that defines a state on some dangling legs. I would like to be able to select a subset of those legs and output the reduced density matrix of those legs.
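For reference, the requested quantity for a two-leg state in plain NumPy; this is not an existing library call, and the interesting part is doing it efficiently for a large network.

import numpy as np

# Hedged sketch: reduced density matrix of leg A for a state psi with two
# dangling legs A and B, i.e. rho_A = Tr_B |psi><psi|.
psi = np.random.rand(2, 3) + 1j * np.random.rand(2, 3)
psi /= np.linalg.norm(psi)
rho_A = np.einsum('ab,cb->ac', psi, np.conj(psi))  # trace out leg B
print(np.trace(rho_A).real)  # Should print approximately 1.0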

Create Bucket elimination algorithm

Bucket elimination is a fast algorithm that works well for networks with small "tree width".

Positive examples include: TTN, quantum circuits, MPS, and MERA.

Negative examples include: SAT TensorNetwork.

Seeing how most of our work involves the former, we should implement this algorithm asap. @viathor has been working on a first version.

Parallelized edge contraction

This might be related to the ContractionNetwork issues: It would be good to have a method that allows to contract edges in parallel.

For example, assume we want to multiply pairs of matrices in a chain:

mat = np.random.random([1000, 20, 20])
result = tf.stack([tf.matmul(mat[i], mat[i + 1]) for i in range(0, 1000, 2)])

to get a (500, 20, 20) tensor. Doing result = tf.matmul(mat[::2], mat[1::2]) is ~10x faster. The API would be something like net.parallelized_contract([list of edges]).

axis_names differ from Edge.name after contraction

Currently, when building a network and giving names to all node axes and all contraction edges, the resulting node after calling net.contract(e) for all Edges e has default axis_names, while the Edges of the node are correctly labeled by the original labels.

D = 4
A = np.random.rand(D, D, D, D)
B = np.random.rand(D, D, D, D)
net = tn.TensorNetwork()
n1 = net.add_node(A, axis_names=['A1', 'A2', 'A3', 'A4'])
n2 = net.add_node(B, axis_names=['B1', 'B2', 'B3', 'B4'])
e1 = net.connect(n1[0], n2[0], name='e1')
e2 = net.connect(n1[2], n2[3], name='e2')
net.contract(e1)
out = net.contract(e2)

print(out.axis_names) #prints ['__Edge_8', '__Edge_9', '__Edge_10', '__Edge_11']
print([e.name for e in out.get_all_edges()]) #prints ['A2', 'A4', 'B2', 'B3']

Same for contract_between

import ncon in experiments/MPS/*.py seems broken

In the current head of master, the import ncon statements in experiments/MPS/*.py seem to be broken, possibly due to the renaming of ncon to ncon_interface. I also cannot see why sys.path.append("../") is used instead of sys.path.append("../../tensornetwork") in these files.

There is also no *_test file in the MPS dir; pytest would have reported this issue if there were any test files.

Multi backend support

Currently, we have a hard-coded dependency on tensorflow, but this need not be the case. Many external researchers use JAX, PyTorch, or even raw numpy, and might not feel comfortable switching over to tensorflow just to experiment with tensor networks.

Our goal is to get more people using TensorNetworks for ML and physics research and to have these computations run on accelerated hardware. Tensorflow is one option to achieve this, but maybe not the best for some researchers.

The only things we use from tensorflow are tensordot, svd, reshape, transpose and array slicing, all of which exist in JAX/Pytorch/numpy.

I propose we do something similar to what Keras does and allow users to hot-swap these backends without affecting the public API. (We've already designed the API to abstract as much tensorflow away as possible.)

This can be done with a Backend class that can be inherited to make TensorflowBackend, JaxBackend, etc. We would only need to support the functions we actually use, which is quite a small set.

A user could set the backend in one of two ways: either with tensornetwork.set_backend("tensorflow") for a global backend, or with tensornetwork.TensorNetwork(backend="tensorflow") for a network-specific backend (which would be very useful for benchmarking!).

Inside TensorNetwork we would just do self.backend.tensordot instead of tf.tensordot.

Overall, not a lot of work for possibly increasing our user base substantially.
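A hedged sketch of what that abstraction could look like; the method set and class names are illustrative, not the final API:

import numpy as np

class Backend:
  """Declares the handful of ops the library actually needs."""

  def tensordot(self, a, b, axes):
    raise NotImplementedError

  def reshape(self, tensor, shape):
    raise NotImplementedError

  def transpose(self, tensor, perm):
    raise NotImplementedError


class NumpyBackend(Backend):
  """Numpy implementation; TensorflowBackend, JaxBackend, etc. would mirror it."""

  def tensordot(self, a, b, axes):
    return np.tensordot(a, b, axes)

  def reshape(self, tensor, shape):
    return np.reshape(tensor, shape)

  def transpose(self, tensor, perm):
    return np.transpose(tensor, perm)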

SVD of a matrix with numpy backend returns wrong results

If I do SVD of a matrix, M = USV^{dag}, using the TensorNetwork package and numpy backend, I get wrong results.

The source of this error seems to be that the underlying tensor of V returned by split_node_full_svd depends on the backend chosen. Specifically, it's the adjoint when using tensorflow but it's just V when using numpy. A code example follows.

Using numpy, if I enter

tensornetwork.set_default_backend("numpy")
net = tensornetwork.TensorNetwork()
tensor = [[5.0,2.0,1.0],[2.0,5.0,1.0],[0.4,0.2,0.3]]
node = net.add_node(tensor)
(u,s,v,terr) = net.split_node_full_svd(node,[node[0]],[node[1]])
print(v.tensor)
print( (u @ s @ v).tensor)

I get

[[-0.69395487  0.70553197 -0.1437055 ]
 [-0.6917008  -0.70867212 -0.13904615]
 [-0.19994158  0.00290945  0.9798035 ]]
[[ 2.04827835 -5.06276861  0.41584831]
 [ 4.97540918 -2.05133595  1.01849117]
 [ 0.19076583 -0.4353305   0.25317139]]

which is wrong.

Using tensorflow, if I enter

tensornetwork.set_default_backend("tensorflow")
net = tensornetwork.TensorNetwork()
tensor = [[5.0,2.0,1.0],[2.0,5.0,1.0],[0.4,0.2,0.3]]
node = net.add_node(tensor)
(u,s,v,terr) = net.split_node_full_svd(node,[node[0]],[node[1]])
sess= tf.Session()
print(v.tensor.eval(session=sess))
print( (u @ s @ v).tensor.eval(session=sess))

I get

[[ 0.6939549   0.6917008   0.19994158]
 [-0.705532    0.7086722  -0.00290945]
 [-0.14370549 -0.13904615  0.97980356]]
[[5.000002   2.0000005  1.0000004 ]
 [2.000001   5.000002   1.0000004 ]
 [0.40000013 0.20000002 0.30000013]]

which is correct.

Add tests for the examples and experiments

We should have some simple tests for this stuff (but not huge computations) that are automatically run by pytest. That way things won't get incompatible (as they already have at least once). Currently only 3sat has tests (AFAIK).

Inner and outer shape

The number of legs and the dimensions of vector spaces corresponding to each leg are determined by the shape property of the tensor encapsulated in a node. Certain types of tensors, e.g. diagonal ones resulting from SVD, can be manipulated more efficiently if the distinction is made between their inner and outer shapes. The former determines the shape of the array that holds the data (e.g. 1-D array for a diagonal matrix) and the latter determines the legs of the tensor (e.g. two legs for a matrix, whether diagonal or not).

Note that a tensor D whose inner and outer shapes differ can be thought of as a tensor network consisting of D and a few copy tensors that "adjust" its outer shape.
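A small illustration of the distinction; the class is hypothetical, not a library type:

import numpy as np

class DiagonalTensor:
  """Stores only the diagonal (inner shape (n,)) of an n x n matrix while
  presenting two legs to the network (outer shape (n, n))."""

  def __init__(self, diag):
    self.diag = np.asarray(diag)

  @property
  def inner_shape(self):
    return self.diag.shape  # shape of the stored data

  @property
  def outer_shape(self):
    n = self.diag.shape[0]
    return (n, n)  # shape the rest of the network sees

  def to_dense(self):
    return np.diag(self.diag)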

LSTM

Dear Sir,
I want to use TensorFlow and LSTM to perform regression prediction on time series data and accelerate it to reduce the run time. However, I am not familiar with TensorNetwork; could I use it to implement what I need? If so, what should I do to implement it? Thank you!

RFC: Overload * or @ for Nodes

I was wondering about convenient ways of manually specifying a contraction order in terms of node pairs. Now I'm thinking it would be cool to overload either the * or, perhaps better, the @ operator for Node objects so that it carries out contract_between(), returning the result.

Here is an example:

tn = TensorNetwork()
a = tn.add_node(a_tensor)
b = tn.add_node(b_tensor)
c = tn.add_node(c_tensor)

# assume tensors are matrices and set up `trace(a @ b @ c)`
tn.connect(a[1], b[0])
tn.connect(b[1], c[0])
tn.connect(c[1], a[0])

res = (a @ b) @ c  # equivalent to `res = tn.contract_between(tn.contract_between(a, b), c)`

This syntax is so much nicer than lots of contract_between(), mainly because one can plausibly make the contraction a one-liner and avoid having to keep track of intermediate nodes.

It would require Nodes to know which network they belong to.
