xanaduai / mrmustard

A differentiable bridge between phase space and Fock space

Home Page: https://mrmustard.readthedocs.io/

License: Apache License 2.0

Python 96.06% Makefile 0.16% Dockerfile 0.07% Shell 0.09% Julia 3.61%
quantum quantum-computing quantum-circuits optimization quantum-machine-learning quantum-optics differentiable-programming photonics photonics-circuits photonics-computing

mrmustard's Introduction



Mr Mustard is a differentiable simulator with a sophisticated built-in optimizer that operates seamlessly across phase space and Fock space. It is built on top of an agnostic autodiff interface, allowing for plug-and-play backends (numpy by default).

Mr Mustard supports:

  • Phase space representation of Gaussian states and Gaussian channels on an arbitrary number of modes
  • Exact Fock representation of any Gaussian circuit and any Gaussian state up to an arbitrary cutoff
  • Riemannian optimization on the symplectic group (for Gaussian transformations) and on the unitary group (for interferometers)
  • Adam optimizer for Euclidean parameters
  • Single-mode gates (parallelizable):
    • squeezing, displacement, phase rotation, attenuator, amplifier, additive noise, phase noise
  • Two-mode gates:
    • beam splitter, Mach-Zehnder interferometer, two-mode squeezing, CX, CZ, CPHASE
  • N-mode gates (with dedicated Riemannian optimization):
    • Interferometer (unitary), RealInterferometer (orthogonal), Gaussian transformation (symplectic)
  • Single-mode states (parallelizable):
    • Vacuum, Coherent, SqueezedVacuum, Thermal, Fock
  • Two-mode states:
    • TMSV (two-mode squeezed vacuum)
  • N-mode states:
    • Gaussian
  • Photon number moments and entropic measures
  • PNR detectors and Threshold detectors with trainable quantum efficiency and dark counts
  • Homodyne, Heterodyne and Generaldyne measurements
  • Composable circuits
  • Plug-and-play backends (numpy as default)
  • An abstraction layer XPTensor for seamless symplectic algebra (experimental)

Increased numerical stability using Julia [optional]

Converting phase space objects to Fock space can be numerically unstable due to accumulating floating point errors. To resolve this, the conversion can be performed with extended-precision arithmetic. To use this feature, an installation of Julia is required (version 1.9.3 recommended). If no valid version of Julia is found, it will be installed automatically before trying to run any Julia code.
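
As a sketch of how to enable this (the setting name PRECISION_BITS_HERMITE_POLY is an assumption based on recent releases and may differ in your version):

from mrmustard import settings

# Assumed setting: bit precision of the Hermite-polynomial recursion behind the
# phase-space-to-Fock conversion; values above the default (128) would dispatch
# the computation to the Julia extension.
settings.PRECISION_BITS_HERMITE_POLY = 512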

The lab module

The lab module contains things you'd find in a lab: states, transformations, measurements, circuits. States can be used at the beginning of a circuit as well as at the end, in which case a state is interpreted as a measurement (a projection onto that state). Transformations are usually parametrized and map states to states. The action on states is differentiable with respect to the state and to the gate parameters.
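
For instance, a minimal sketch previewing the measurement syntax of section 4 (a state placed after << acts as a projector and, when all modes are measured, returns a probability):

from mrmustard.lab import Vacuum, Dgate, Coherent

# displacing the vacuum produces the matching coherent state,
# so projecting onto it gives probability ~1.0
p = (Vacuum(1) >> Dgate(x=0.3)) << Coherent(x=0.3)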

1. States and Gates

Here are a few examples of states and gates:

import numpy as np
from mrmustard.lab import *

vac = Vacuum(num_modes=2)        # 2-mode vacuum state
coh = Coherent(x=0.1, y=-0.4)    # coherent state |alpha> with alpha = 0.1 - 0.4j
sq  = SqueezedVacuum(r=0.5)      # squeezed vacuum state
g   = Gaussian(num_modes=2)      # 2-mode Gaussian state with zero means
fock4 = Fock(4)                  # Fock state |4>

D  = Dgate(x=1.0, y=-0.4)         # Displacement by 1.0 along x and -0.4 along y
S  = Sgate(r=0.5)                 # Squeezer with r=0.5
R  = Rgate(angle=0.3)             # Phase rotation by 0.3
A  = Amplifier(gain=2.0, nbar=1.0) # noisy amplifier with gain 2.0 and nbar=1.0 thermal photons
L  = Attenuator(0.5)              # pure loss channel with 50% transmissivity
N  = AdditiveNoise(noise=0.1)     # additive noise with noise level 0.1

BS = BSgate(theta=np.pi/4)          # 50/50 beam splitter
S2 = S2gate(r=0.5)                  # two-mode squeezer
MZ = MZgate(phi_a=0.3, phi_b=0.1)   # Mach-Zehnder interferometer
I  = Interferometer(8)              # 8-mode interferometer

The repr of single-mode states shows the Wigner function:

[Wigner function plot]

cat_amps = Coherent(2.0).ket([20]) + Coherent(-2.0).ket([20])
cat_amps = cat_amps / np.linalg.norm(cat_amps)
cat = State(ket=cat_amps)
cat

[Wigner function plot of the cat state]

States (even those in Fock representation) are always compatible with gates:

cat >> Sgate(0.5)  # squeezed cat

[Wigner function plot of the squeezed cat state]

2. Gates and the right shift operator >>

Applying gates to states looks natural, thanks to python's right-shift operator >>:

displaced_squeezed = Vacuum(1) >> Sgate(r=0.5) >> Dgate(x=1.0)

If you want to apply a gate to specific modes, use the getitem format. Here are a few examples:

D = Dgate(y=-0.4)
S = Sgate(r=0.1, phi=0.5)
state = Vacuum(2) >> D[1] >> S[0]  # displacement on mode 1 and squeezing on mode 0

BS = BSgate(theta=1.1)
state = Vacuum(3) >> BS[0,2]  # applying a beamsplitter to modes 0 and 2
state = Vacuum(4) >> S[0,1,2]  # applying the same Sgate in parallel to modes 0, 1 and 2 but not to mode 3

3. Circuit

When chaining just gates with the right-shift >> operator, we create a circuit:

X8 = Sgate(r=[1.0] * 4) >> Interferometer(4)
output = Vacuum(4) >> X8

# lossy X8
noise = lambda: np.random.uniform(size=4)
X8_noisy = (Sgate(r=0.9 + 0.1*noise(), phi=0.1*noise())
                >> Attenuator(0.89 + 0.01*noise())
                >> Interferometer(4)
                >> Attenuator(0.95 + 0.01*noise())
               )

# 2-mode Bloch Messiah decomposition
bloch_messiah = Sgate(r=[0.1,0.2]) >> BSgate(theta=-0.1, phi=2.1) >> Dgate(x=[0.1, -0.4])
my_state = Vacuum(2) >> bloch_messiah

4. Measurements

In order to perform a measurement, we use the left-shift operator, e.g. coh << sq (think of the left-shift on a state as "closing" the circuit).

leftover = Vacuum(4) >> X8 << SqueezedVacuum(r=10.0, phi=np.pi)[2]  # a homodyne measurement of p=0.0 on mode 2

Transformations can also be applied in the dual sense by using the left-shift operator <<:

Attenuator(0.5) << Coherent(0.1, 0.2) == Coherent(0.1, 0.2) >> Amplifier(2.0)

This has the advantage of modelling lossy detectors without applying the loss channel to the state going into the detector, which can be faster overall, e.g. if the state is kept pure by doing so.
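
A hedged sketch of that pattern, reusing the states from the examples above (the exact composition semantics may vary by version):

# absorb an 80%-efficient detector's loss into the projector,
# leaving the state that enters the detector untouched (and possibly pure)
lossy_projector = Attenuator(0.8) << Coherent(0.1, 0.2)
prob = SqueezedVacuum(r=0.5) << lossy_projector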

5. Detectors

There are two types of detectors in Mr Mustard: Fock detectors (PNRDetector and ThresholdDetector) and Gaussian detectors (Homodyne, Heterodyne, Generaldyne).

The PNR and Threshold detectors return an array of unnormalized measurement results, meaning that the elements of the array are the density matrices of the leftover systems, conditioned on the outcomes:

results = Gaussian(2) << PNRDetector(efficiency = 0.9, modes = [0])
results[0]  # unnormalized dm of mode 1 conditioned on measuring 0 in mode 0
results[1]  # unnormalized dm of mode 1 conditioned on measuring 1 in mode 0
results[2]  # unnormalized dm of mode 1 conditioned on measuring 2 in mode 0
# etc...

The trace of each leftover density matrix yields the probability of the corresponding outcome. If multiple modes are measured, the results array has a corresponding number of indices:

results = Gaussian(3) << PNRDetector(efficiency = [0.9, 0.8], modes = [0,1])
results[2,3]  # unnormalized dm of mode 2 conditioned on measuring 2 in mode 0 and 3 in mode 1
# etc...
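
As a sketch of how the outcome probabilities follow from the traces (assuming results behaves as a NumPy-like array of conditional density matrices, as indexed above):

import numpy as np

p_23 = np.trace(results[2, 3])   # probability of measuring 2 in mode 0 and 3 in mode 1
dm_23 = results[2, 3] / p_23     # renormalized conditional state of mode 2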

Set a lower settings.PNR_INTERNAL_CUTOFF (default 50) to speed up computations of the PNR output.
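
For example (settings is importable from the top-level package, as elsewhere in this document):

from mrmustard import settings

settings.PNR_INTERNAL_CUTOFF = 20  # lower internal cutoff: faster, less accurate PNR output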

6. Comparison operator ==

States support the comparison operator:

>>> bunched = (Coherent(1.0) & Coherent(1.0)) >> BSgate(np.pi/4)
>>> bunched.get_modes(1) == Coherent(np.sqrt(2.0))
True

As well as transformations (gates and circuits):

>>> Dgate(np.sqrt(2)) >> Attenuator(0.5) == Attenuator(0.5) >> Dgate(1.0)
True

7. State operations and properties

States can be joined using the & (and) operator:

Coherent(x=1.0, y=1.0) & Coherent(x=2.0, y=2.0)  # A separable two-mode coherent state

s = SqueezedVacuum(r=1.0)
s4 = s & s & s & s   # four squeezed states

Subsystems can be accessed via get_modes:

joint = Coherent(x=1.0, y=1.0) & Coherent(x=2.0, y=2.0)
joint.get_modes(0)  # first mode
joint.get_modes(1)  # second mode

swapped = joint.get_modes([1,0])

8. Fock representation

The Fock representation of a State is obtained via .ket(cutoffs) or .dm(cutoffs). For circuits and gates (transformations in general) it's .U(cutoffs) or .choi(cutoffs), if available. The Fock representation is exact and doesn't break differentiability: one can define cost functions on the Fock representation and backpropagate to the phase-space parameters.

# Fock representation of a coherent state
Coherent(0.5).ket(cutoffs=[5])   # ket
Coherent(0.5).dm(cutoffs=[5])    # density matrix

Dgate(x=1.0).U(cutoffs=[15])  # truncated unitary matrix
Dgate(x=1.0).choi(cutoffs=[15])  # truncated choi tensor
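
To illustrate the differentiability claim, here is a minimal sketch (assuming the TensorFlow backend and the trainable-parameter flags used in the Optimization section below):

from mrmustard import math
from mrmustard.lab import Sgate, Vacuum

math.change_backend("tensorflow")

S = Sgate(r=0.2, r_trainable=True)

def cost_fn():
    ket = (Vacuum(1) >> S).ket(cutoffs=[10])  # Fock amplitudes, differentiable w.r.t. r
    return -math.abs(ket[2]) ** 2             # reward the two-photon amplitude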

States can be initialized in Fock representation and used as any other state:

my_amplitudes = np.array([0.5, 0.25, -0.5, 0.25, 0.25, 0.5, -0.25] + [0.0]*23)  # notice the buffer
my_state = State(ket=my_amplitudes)
my_state >> Sgate(r=0.5)  # just works

[Wigner function plot of the squeezed state]

Alternatively,

my_amplitudes = np.array([0.5, 0.25, -0.5, 0.25, 0.25, 0.5, -0.25])  # no buffer
my_state = State(ket=my_amplitudes)
my_state._cutoffs = [42]  # force the cutoff
my_state >> Sgate(r=0.5)  # works too

The physics module

The physics module contains a growing number of functions that can be applied to states directly. They are built from functions that operate on the representation of the state:

  • If the state is in Gaussian representation, then internally the physics functions utilize the physics.gaussian module.
  • If the state is in Fock representation, then internally the physics functions utilize the physics.fock module.
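
For example, fidelity (also used in the Optimization section below) dispatches on the representation of its arguments:

from mrmustard.lab import Coherent, Fock
from mrmustard.physics import fidelity

fidelity(Coherent(0.1), Coherent(0.1))  # Gaussian states: handled by physics.gaussian
fidelity(Fock(1), Fock(1))              # Fock states: handled by physics.fock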

The math module

The math module is the backbone of Mr Mustard. Mr Mustard comes with plug-and-play backends through a math interface. You can use it as a drop-in replacement for tensorflow or numpy, and your code will be plug-and-play too!

Here's an example where the numpy backend is used.

import mrmustard.math as math

math.cos(0.1)  # numpy

In a different session, we can change the backend to tensorflow.

import mrmustard.math as math
math.change_backend("tensorflow")

math.cos(0.1)  # tensorflow

Optimization

The mrmustard.training.Optimizer uses Adam under the hood for the optimization of Euclidean parameters, a custom symplectic optimizer for Gaussian gates and states, and a unitary/orthogonal optimizer for interferometers.

We can turn any simulation in Mr Mustard into an optimization by marking which parameters we wish to be trainable. Let's take a simple example: synthesizing a displaced squeezed state.

from mrmustard import math
from mrmustard.lab import Dgate, Ggate, Attenuator, Vacuum, Coherent, DisplacedSqueezed
from mrmustard.physics import fidelity
from mrmustard.training import Optimizer

math.change_backend("tensorflow")

D = Dgate(x = 0.1, y = -0.5, x_trainable=True, y_trainable=True)
L = Attenuator(transmissivity=0.5)

# we write a function that takes no arguments and returns the cost
def cost_fn_eucl():
    state_out = Vacuum(1) >> D >> L
    return 1 - fidelity(state_out, Coherent(0.1, 0.5))

G = Ggate(num_modes=1, symplectic_trainable=True)
def cost_fn_sympl():
    state_out = Vacuum(1) >> G >> D >> L
    return 1 - fidelity(state_out, DisplacedSqueezed(r=0.3, phi=1.1, x=0.4, y=-0.2))

# For illustration, here the Euclidean optimization doesn't include squeezing 
opt = Optimizer(symplectic_lr=0.1, euclidean_lr=0.01)
opt.minimize(cost_fn_eucl, by_optimizing=[D])  # using Adam for D

# But the symplectic optimization always does
opt = Optimizer(symplectic_lr=0.1, euclidean_lr=0.01)
opt.minimize(cost_fn_sympl, by_optimizing=[G,D])  # uses Adam for D and the symplectic opt for G

mrmustard's People

Contributors

aplund, dependabot[bot], elib20, ggulli, jan-provaznik, josh146, kaspernielsen96, mandrenkov, nquesada, rdprins, samferracin, sduquemesa, sylviemonet, thisac, timmysilv, zeyuen, zhiihan, ziofil


mrmustard's Issues

Be able to export the State figure

Before posting a feature request

  • I have searched existing GitHub issues to make sure the feature request does not already exist.

Feature details

Add an option to get the data used to plot the State (Wigner function), or to export the figure directly.

Implementation

No response

How important would you say this feature is?

1: Not important. Would be nice to have.

Additional information

No response

`gaussian.trace` does not work

When running this basic example

from mrmustard.lab import *
from mrmustard.physics.gaussian import trace

state =  (Vacuum(2) >> Ggate(num_modes = 2))
mu1, cov1 = trace(state.means, state.cov, [1])

I get the following error

---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
/tmp/ipykernel_976900/1572327151.py in <module>
      3 
      4 state =  (Vacuum(2) >> Ggate(num_modes = 2))
----> 5 mu1, cov1 = trace(state.means, state.cov, [1])

~/Code/MrMustard/mrmustard/physics/gaussian.py in trace(cov, means, Bmodes)
    594     )
    595     return [max(1, int(i)) for i in cutoffs]
--> 596 
    597 
    598 def trace(cov: Matrix, means: Vector, Bmodes: Sequence[int]) -> Tuple[Matrix, Vector]:

~/Code/MrMustard/mrmustard/math/tensorflow.py in gather(self, array, indices, axis)
    131 
    132     def gather(self, array: tf.Tensor, indices: tf.Tensor, axis: int = None) -> tf.Tensor:
--> 133         return tf.gather(array, indices, axis=axis)
    134 
    135     def hash_tensor(self, tensor: tf.Tensor) -> int:

~/miniconda3/envs/mrmustard/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
    204     """Call target, and fall back on dispatchers if there is a TypeError."""
    205     try:
--> 206       return target(*args, **kwargs)
    207     except (TypeError, ValueError):
    208       # Note: convert_to_eager_tensor currently raises a ValueError, not a

~/miniconda3/envs/mrmustard/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py in gather_v2(params, indices, validate_indices, axis, batch_dims, name)
   5067               batch_dims=0,
   5068               name=None):
-> 5069   return gather(
   5070       params,
   5071       indices,

~/miniconda3/envs/mrmustard/lib/python3.8/site-packages/tensorflow/python/util/deprecation.py in new_func(*args, **kwargs)
    547                 'in a future version' if date is None else ('after %s' % date),
    548                 instructions)
--> 549       return func(*args, **kwargs)
    550 
    551     doc = _add_deprecated_arg_notice_to_docstring(

~/miniconda3/envs/mrmustard/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
    204     """Call target, and fall back on dispatchers if there is a TypeError."""
    205     try:
--> 206       return target(*args, **kwargs)
    207     except (TypeError, ValueError):
    208       # Note: convert_to_eager_tensor currently raises a ValueError, not a

~/miniconda3/envs/mrmustard/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py in gather(***failed resolving arguments***)
   5049     axis = batch_dims
   5050   if tensor_util.constant_value(axis) != 0:
-> 5051     return gen_array_ops.gather_v2(
   5052         params, indices, axis, batch_dims=batch_dims, name=name)
   5053   try:

~/miniconda3/envs/mrmustard/lib/python3.8/site-packages/tensorflow/python/ops/gen_array_ops.py in gather_v2(params, indices, axis, batch_dims, name)
   3806       return _result
   3807     except _core._NotOkStatusException as e:
-> 3808       _ops.raise_from_not_ok_status(e, name)
   3809     except _core._FallbackException:
   3810       pass

~/miniconda3/envs/mrmustard/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
   6939   message = e.message + (" name: " + name if name is not None else "")
   6940   # pylint: disable=protected-access
-> 6941   six.raise_from(core._status_to_exception(e.code, message), None)
   6942   # pylint: enable=protected-access
   6943 

~/miniconda3/envs/mrmustard/lib/python3.8/site-packages/six.py in raise_from(value, from_value)

InvalidArgumentError: Shape must be at least rank 2 but is rank 1 [Op:GatherV2]

Concatenating Fock states doesn't always work

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

This code returns one of the two output modes of a BSgate with a single photon going into the first mode:

output = (Fock(1) & Vacuum(1)) >> BSgate(1.0)
output.get_modes(1)  # should be a|0> + b|1> for some amplitudes a, b

Actual behavior

One mode is always vacuum, while the other is always a single photon, no matter the angle of the BSgate.

Reproduces how often

deterministic

System information

Mr Mustard: a differentiable bridge between phase space and Fock space.
Copyright 2018-2021 Xanadu Quantum Technologies Inc.

Python version:            3.9.7
Platform info:             macOS-12.0.1-arm64-i386-64bit
Installation path:         /Users/filippo/Dropbox/Work/Xanadu/projects/MrMustard/mrmustard
Mr Mustard version:        0.3.0-dev
Numpy version:             1.21.4
Numba version:             0.53.1
Scipy version:             1.7.3
The Walrus version:        0.20.0-dev
TensorFlow version:        2.6.2
Torch version:             1.10.0

Source code

No response

Tracebacks

No response

Additional information

No response

Unsupported generator object used for indexing Tensors

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

States in Fock representation should be indexed correctly with integers, slices (:), ellipsis (...), tf.newaxis (None) and scalar tf.int32/tf.int64 tensors.

Actual behavior

An unsupported generator object is used to index the arrays.

Reproduces how often

Every time the fidelity between states is calculated

System information

Python version:            3.8.5
Platform info:             Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.29
Installation path:         /home/sduquemesa/xanadu/MrMustard/mrmustard
Mr Mustard version:        0.2.0-dev
Numpy version:             1.19.5
Numba version:             0.53.1
Scipy version:             1.7.3
The Walrus version:        0.17.0
TensorFlow version:        2.6.2
Torch version:             None

Source code

import numpy as np
from mrmustard.lab import Fock, Attenuator, Sgate, BSgate, Vacuum
from mrmustard.physics import fidelity, normalize
from mrmustard.utils.training import Optimizer
from mrmustard import settings
from mrmustard.math import Math

S = Sgate(r=1,r_trainable=True,r_bounds=(0,2))
BS = BSgate(theta=np.pi/3,phi=np.pi/4,theta_trainable=True,phi_trainable=True)[0,1]

def cost_fn_pure():
    state_out = Vacuum(2) >> S >> BS << Fock([2], [1])
    return 1 - fidelity(state_out, Fock([2],[0]))

opt = Optimizer(euclidean_lr=0.01)
opt.minimize(cost_fn_pure, by_optimizing=[S, BS])  # triggers the traceback below

Tracebacks

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Input In [3], in <module>
----> 1 opt.minimize(cost_fn_pure, by_optimizing=[S,BS])

File ~/xanadu/MrMustard/mrmustard/utils/training.py:74, in Optimizer.minimize(self, cost_fn, by_optimizing, max_steps)
     72 with bar:
     73     while not self.should_stop(max_steps):
---> 74         cost, grads = math.value_and_gradients(cost_fn, params)
     75         update_symplectic(params["symplectic"], grads["symplectic"], self.symplectic_lr)
     76         update_orthogonal(params["orthogonal"], grads["orthogonal"], self.orthogonal_lr)

File ~/xanadu/MrMustard/mrmustard/math/tensorflow.py:317, in TFMath.value_and_gradients(self, cost_fn, parameters)
    306 r"""Computes the loss and gradients of the given cost function.
    307 
    308 Args:
   (...)
    314     The loss and the gradients.
    315 """
    316 with tf.GradientTape() as tape:
--> 317     loss = cost_fn()
    318 gradients = tape.gradient(loss, list(parameters.values()))
    319 return loss, dict(zip(parameters.keys(), gradients))

Input In [3], in cost_fn_pure()

File ~/xanadu/MrMustard/mrmustard/physics/__init__.py:39, in fidelity(A, B)
     37 if A.is_gaussian and B.is_gaussian:
     38     return gaussian.fidelity(A.means, A.cov, B.means, B.cov, settings.HBAR)
---> 39 return fock.fidelity(A.fock, B.fock, a_ket=A._ket is not None, b_ket=B._ket is not None)

File ~/xanadu/MrMustard/mrmustard/physics/fock.py:222, in fidelity(state_a, state_b, a_ket, b_ket)
    220 if a_ket and b_ket:
    221     min_cutoffs = (slice(min(a, b)) for a, b in zip(state_a.shape, state_b.shape))
--> 222     state_a = state_a[min_cutoffs]
    223     state_b = state_b[min_cutoffs]
    224     return math.abs(math.sum(math.conj(state_a) * state_b)) ** 2

File ~/xanadu/MrMustard/venv/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:206, in add_dispatch_support.<locals>.wrapper(*args, **kwargs)
    204 """Call target, and fall back on dispatchers if there is a TypeError."""
    205 try:
--> 206   return target(*args, **kwargs)
    207 except (TypeError, ValueError):
    208   # Note: convert_to_eager_tensor currently raises a ValueError, not a
    209   # TypeError, when given unexpected types.  So we need to catch both.
    210   result = dispatch(wrapper, args, kwargs)

File ~/xanadu/MrMustard/venv/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:1014, in _slice_helper(tensor, slice_spec, var)
   1012   new_axis_mask |= (1 << index)
   1013 else:
-> 1014   _check_index(s)
   1015   begin.append(s)
   1016   end.append(s + 1)

File ~/xanadu/MrMustard/venv/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:888, in _check_index(idx)
    883 dtype = getattr(idx, "dtype", None)
    884 if (dtype is None or dtypes.as_dtype(dtype) not in _SUPPORTED_SLICE_DTYPES or
    885     idx.shape and len(idx.shape) == 1):
    886   # TODO(slebedev): IndexError seems more appropriate here, but it
    887   # will break `_slice_helper` contract.
--> 888   raise TypeError(_SLICE_TYPE_ERROR + ", got {!r}".format(idx))

TypeError: Only integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got <generator object fidelity.<locals>.<genexpr> at 0x7f1a042acd60>

Additional information

No response

TensorFlow dependency needs to be pinned <2.16.0

TensorFlow >=2.16.0 was released a month ago. Installing MrMustard with pip install mrmustard will install TensorFlow 2.16.1 by default. This is problematic, as new releases of TensorFlow dropped support for Keras 2 -- see the release notes. Therefore, the following call

return tf.keras.optimizers.legacy.Adam(learning_rate=0.001)

will trigger an error. For an MWE, the error can easily be triggered with the optimization given in the README

MrMustard/README.md

Lines 252 to 285 in d31b70f

### Optimization
The `mrmustard.training.Optimizer` uses Adam underneath the hood for the optimization of Euclidean parameters, a custom symplectic optimizer for Gaussian gates and states and a unitary/orthogonal optimizer for interferometers.
We can turn any simulation in Mr Mustard into an optimization by marking which parameters we wish to be trainable. Let's take a simple example: synthesizing a displaced squeezed state.
```python
from mrmustard import math
from mrmustard.lab import Dgate, Ggate, Attenuator, Vacuum, Coherent, DisplacedSqueezed
from mrmustard.physics import fidelity
from mrmustard.training import Optimizer

math.change_backend("tensorflow")

D = Dgate(x=0.1, y=-0.5, x_trainable=True, y_trainable=True)
L = Attenuator(transmissivity=0.5)

# we write a function that takes no arguments and returns the cost
def cost_fn_eucl():
    state_out = Vacuum(1) >> D >> L
    return 1 - fidelity(state_out, Coherent(0.1, 0.5))

G = Ggate(num_modes=1, symplectic_trainable=True)
def cost_fn_sympl():
    state_out = Vacuum(1) >> G >> D >> L
    return 1 - fidelity(state_out, DisplacedSqueezed(r=0.3, phi=1.1, x=0.4, y=-0.2))

# For illustration, here the Euclidean optimization doesn't include squeezing
opt = Optimizer(symplectic_lr=0.1, euclidean_lr=0.01)
opt.minimize(cost_fn_eucl, by_optimizing=[D])  # using Adam for D

# But the symplectic optimization always does
opt = Optimizer(symplectic_lr=0.1, euclidean_lr=0.01)
opt.minimize(cost_fn_sympl, by_optimizing=[G, D])  # uses Adam for D and the symplectic opt for G
```

As a temporary fix, I suggest replacing ^2.15.0 with ~2.15.0 in the following

MrMustard/pyproject.toml

Lines 52 to 59 in d31b70f

tensorflow = {version = "^2.15.0" }
tensorflow-macos = { version = "2.15.0", platform = "darwin", markers = "platform_machine=='arm64'" }
tensorflow-intel = { version = "^2.15.0", platform = "win32" }
# Disabled to prevent taking over GPU:
# tensorflow-cpu = [
# { version = "^2.15.0", platform = "linux", markers = "platform_machine!='arm64' and platform_machine!='aarch64'" },
# { version = "^2.15.0", platform = "darwin", markers = "platform_machine!='arm64' and platform_machine!='aarch64'" },]
tensorflow-cpu-aws = { version = "^2.15.0", platform = "linux", markers = "platform_machine=='arm64' or platform_machine=='aarch64'" }

Channels can only be applied at the end of a circuit

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

An Attenuator or Amplifier gate is applied on a single mode of the circuit.

Actual behavior

Attenuator or Amplifier gates can only be applied at the end of the circuit; applying them anywhere else raises the reported error.

Reproduces how often

Every time an Attenuator or Amplifier is applied somewhere other than the end of the circuit.

System information

Python version:            3.8.5
Platform info:             Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.29
Installation path:         /home/sduquemesa/xanadu/MrMustard/mrmustard
Mr Mustard version:        0.2.0-dev
Numpy version:             1.19.5
Numba version:             0.53.1
Scipy version:             1.7.3
The Walrus version:        0.17.0
TensorFlow version:        2.6.2
Torch version:             None

Source code

import mrmustard.lab as mml

circ = mml.Circuit()
circ = circ >> mml.Attenuator(0.932)[1] >> mml.BSgate(theta=0.627, phi=1.15)[0, 1]

S = circ.XYd[0]

Tracebacks

Input In [4], in <module>
      1 circ = mml.Circuit()
      2 circ = circ >> mml.Attenuator(0.932)[1] >> mml.BSgate(theta=0.627, phi=1.15)[0, 1]
----> 4 circ.XYd[0]

File ~/xanadu/MrMustard/mrmustard/lab/circuit.py:83, in Circuit.XYd(self)
     81         opd = opd.clone(len(op.modes), modes=op.modes)
     82     X = opX @ X
---> 83     Y = opX @ Y @ opX.T + opY
     84     d = opX @ d + opd
     85 return X.to_xxpp(), Y.to_xxpp(), d.to_xxpp()

File ~/xanadu/MrMustard/mrmustard/utils/xptensor.py:253, in XPTensor.__matmul__(self, other)
    251 if self.isMatrix and other.isMatrix:
    252     tensor, modes = self._mode_aware_matmul(other)
--> 253     return XPMatrix(tensor, like_1=self.like_1 and other.like_1, modes=modes)
    254 elif self.isMatrix and other.isVector:
    255     tensor, modes = self._mode_aware_matmul(other)

File ~/xanadu/MrMustard/mrmustard/utils/xptensor.py:541, in XPMatrix.__init__(self, tensor, like_0, like_1, modes)
    537     modes = tuple(
    538         list(range(s)) for s in tensor.shape[:2]
    539     )  # NOTE assuming that it isn't a coherence block
    540 like_0 = like_0 if like_0 is not None else not like_1
--> 541 super().__init__(tensor, like_0, isVector=False, modes=modes)

File ~/xanadu/MrMustard/mrmustard/utils/xptensor.py:85, in XPTensor.__init__(self, tensor, like_0, isVector, modes)
     83 self.tensor = tensor
     84 if not (set(modes[0]) == set(modes[1]) or set(modes[0]).isdisjoint(modes[1])):
---> 85     raise ValueError(
     86         "The inmodes and outmodes should either contain the same modes or be disjoint"
     87     )
     88 self.modes = modes

ValueError: The inmodes and outmodes should either contain the same modes or be disjoint

Additional information

Notice that the following code, where the `Attenuator` is located at the end of the circuit, will run successfully.


import mrmustard.lab as mml

# %%
circ = mml.Circuit()
circ = circ >> mml.BSgate(theta=0.627, phi=1.15)[0, 1] >> mml.Attenuator(0.932)[1] 

S = circ.XYd[0]

The issue is resolved if the Attenuator is applied explicitly to all the circuit modes, with only the mode of interest given a transmissivity different from one (the other modes get a trivial attenuator).

import mrmustard.lab as mml

circ = mml.Circuit()
circ = circ >> mml.Attenuator([0.932,1], modes = [0,1]) >> mml.BSgate(theta=0.627, phi=1.15)[0, 1]

S = circ.XYd[0]

In this case there is the overhead of adding trivial operations to modes, which in turn can impact calculation and circuit-optimization performance.

All in all, this seems to be an issue with the way modes are handled by the inner workings of Mr Mustard, specifically in the mode-aware matrix multiplication.

The circuit drawer displays parameters incorrectly

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

Dgate(1.0)[2] >> Sgate([1., -1])
The circuit drawer should represent the Sgates as S(1.0, 0.0) and S(-1.0, 0.0)

Actual behavior

The circuit drawer displays both the Sgates as S(r, 0.0)

Reproduces how often

always

System information

Mr Mustard: a differentiable bridge between phase space and Fock space.
Copyright 2021 Xanadu Quantum Technologies Inc.

Python version:            3.9.18
Platform info:             Linux-5.15.0-92-generic-x86_64-with-glibc2.31
Installation path:         /home/samuele.ferracin/git/MrMustard/mrmustard
Mr Mustard version:        0.7.0.dev0
Numpy version:             1.23.5
Numba version:             0.56.4
Scipy version:             1.12.0
The Walrus version:        0.19.0
TensorFlow version:        2.14.0

Source code

No response

Tracebacks

No response

Additional information

No response

probability vectors do not update upon cutoff increase

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

The length of the probability vector equals the cutoff.

Actual behavior

The first cutoff sets the length once and for all.

Reproduces how often

100%

System information

Python version:            3.9.16
Platform info:             Linux-5.15.49-linuxkit-aarch64-with-glibc2.31
Installation path:         /workspaces/MrMustard/mrmustard
Mr Mustard version:        0.5.0-dev
Numpy version:             1.23.5
Numba version:             0.56.4
Scipy version:             1.8.0
The Walrus version:        0.19.0
TensorFlow version:        2.10.1

Source code

from mrmustard.lab import *

fock_state = State(ket=Gaussian(1).ket())

print(len(fock_state.fock_probabilities([4])))   # prints 4
print(len(fock_state.fock_probabilities([8])))   # prints 4 again

Tracebacks

No response

Additional information

No response

Flag to turn on/off the progress bar in an optimization

Before posting a feature request

  • I have searched existing GitHub issues to make sure the feature request does not already exist.

Feature details

The ability to turn on/off the progress bar while running an optimization.

Implementation

I think the minimize function of the Optimizer class in utils/training.py can be modified as follows:

def minimize(
    self,
    cost_fn: Callable,
    by_optimizing: Sequence[Trainable],
    max_steps: int = 1000,
    progress_bar: bool = True,
):
    r"""Minimizes the given cost function by optimizing circuits and/or detectors.

    Args:
        cost_fn (Callable): a function that will be executed in a differentiable context in
            order to compute gradients as needed
        by_optimizing (list of circuits and/or detectors and/or gates): a list of elements that
            contain the parameters to optimize
        max_steps (int): the minimization keeps going until the loss is stable or max_steps are
            reached (if ``max_steps=0`` it will only stop when the loss is stable)
    """
    try:
        params = {
            "symplectic": math.unique_tensors(
                [p for item in by_optimizing for p in item.trainable_parameters["symplectic"]]
            ),
            "orthogonal": math.unique_tensors(
                [p for item in by_optimizing for p in item.trainable_parameters["orthogonal"]]
            ),
            "euclidean": math.unique_tensors(
                [p for item in by_optimizing for p in item.trainable_parameters["euclidean"]]
            ),
        }
        if progress_bar:
            bar = graphics.Progressbar(max_steps)
            with bar:
                while not self.should_stop(max_steps):
                    cost, grads = math.value_and_gradients(cost_fn, params)
                    update_symplectic(params["symplectic"], grads["symplectic"], self.symplectic_lr)
                    update_orthogonal(params["orthogonal"], grads["orthogonal"], self.orthogonal_lr)
                    update_euclidean(params["euclidean"], grads["euclidean"], self.euclidean_lr)
                    self.opt_history.append(cost)
                    bar.step(math.asnumpy(cost))
        else:
            while not self.should_stop(max_steps):
                cost, grads = math.value_and_gradients(cost_fn, params)
                update_symplectic(params["symplectic"], grads["symplectic"], self.symplectic_lr)
                update_orthogonal(params["orthogonal"], grads["orthogonal"], self.orthogonal_lr)
                update_euclidean(params["euclidean"], grads["euclidean"], self.euclidean_lr)
                self.opt_history.append(cost)
    except KeyboardInterrupt:  # graceful exit
        self.log.info("Optimizer execution halted due to keyboard interruption.")
        raise self.OptimizerInterruptedError() from None

How important would you say this feature is?

3: Very important! Blocking work.

Additional information

No response

Modes are not concatenated correctly between Fock and Gaussian states

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

When concatenating a Fock and a Gaussian state, their modes should be consecutive, meaning that the following code

from mrmustard.lab import TMSV, Fock, SqueezedVacuum
state_fock = TMSV(r=-0.5,phi=0,modes=[0,1]) & Fock([1,2,3],modes=[2,3,4])
state_fock.modes

should produce the output [0, 1, 2, 3, 4]. Similarly, concatenating two Gaussian states

state_gaussian = TMSV(r=-0.5,phi=0,modes=[0,1]) & SqueezedVacuum(r=[1.0]*3,phi=[0.0]*3,modes=[2,3,4])
state_gaussian.modes

should produce [0, 1, 2, 3, 4] as well.

Actual behavior

In the first case the modes come out as [0, 1, 4, 5, 6], whereas in the second they are [0, 1, 2, 3, 4] as expected.

Reproduces how often

always

System information

not relevant

Source code

from mrmustard.lab import TMSV, Fock, SqueezedVacuum
state_fock = TMSV(r=-0.5,phi=0,modes=[0,1]) & Fock([1,2,3],modes=[2,3,4])
state_fock.modes

Tracebacks

No response

Additional information

No response

Compute oscillator eigenstate for any angle

Before posting a feature request

  • I have searched existing GitHub issues to make sure the feature request does not already exist.

Feature details

In physics.fock, there is a function to compute the Fock representation of oscillator eigenstates for the q-quadrature, i.e. <n|q>. It is simple to compute <n|R(phi)|q> = exp(i*n*phi)<n|q> as well, but I am not good enough at tensorflow to figure out how to implement this myself.

Implementation

Provide an extra phi argument to the function, and wherever the loop over n occurs, just add the extra complex phase to each term.

How important would you say this feature is?

2: Somewhat important. Needed this quarter.

Additional information

No response

ImportError: cannot import name 'builder' from 'google.protobuf.internal'

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

  1. conda create a new env with python=3.9.
  2. Install TensorFlow with conda install tensorflow -> gets TensorFlow version 2.10.0.
  3. Install MrMustard with pip install mrmustard.

Actual behavior

It always raises an ImportError.

Reproduces how often

Every time

System information

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Cell In[1], line 1
----> 1 import mrmustard as mm; mm.about()

File ~/miniforge3/envs/tf11/lib/python3.9/site-packages/mrmustard/__init__.py:92, in about()
     90 import numba
     91 import scipy
---> 92 import thewalrus
     93 import tensorflow
     95 # a QuTiP-style infobox

File ~/miniforge3/envs/tf11/lib/python3.9/site-packages/thewalrus/__init__.py:97
      1 # Copyright 2021 Xanadu Quantum Technologies Inc.
      2 
      3 # Licensed under the Apache License, Version 2.0 (the "License");
   (...)
     12 # See the License for the specific language governing permissions and
     13 # limitations under the License.
     14 r"""
     15 The Walrus
     16 ==========
   (...)
     95 ------------
     96 """
---> 97 import thewalrus.quantum
     98 import thewalrus.csamples
     99 import thewalrus.decompositions

File ~/miniforge3/envs/tf11/lib/python3.9/site-packages/thewalrus/quantum/__init__.py:140
    137 import warnings
    138 import functools
--> 140 from .fock_tensors import (
    141     pure_state_amplitude,
    142     state_vector,
    143     density_matrix_element,
    144     density_matrix,
    145     fock_tensor,
    146     probabilities,
    147     loss_mat,
    148     update_probabilities_with_loss,
    149     update_probabilities_with_noise,
    150     find_classical_subsystem,
    151     tvd_cutoff_bounds,
    152     n_body_marginals,
    153 )
    155 from .adjacency_matrices import (
    156     adj_scaling,
    157     adj_scaling_torontonian,
    158     adj_to_qmat,
    159 )
    161 from .gaussian_checks import (
    162     is_valid_cov,
    163     is_pure_cov,
   (...)
    166     is_symplectic,
    167 )

File ~/miniforge3/envs/tf11/lib/python3.9/site-packages/thewalrus/quantum/fock_tensors.py:27
     24 import numpy as np
     25 import dask
---> 27 from scipy.special import factorial as fac
     28 from numba import jit
     30 from ..symplectic import expand, is_symplectic, reduced_state

File ~/miniforge3/envs/tf11/lib/python3.9/site-packages/scipy/special/__init__.py:649
      1 """
      2 ========================================
      3 Special functions (:mod:`scipy.special`)
   (...)
    644 
    645 """
    647 from ._sf_error import SpecialFunctionWarning, SpecialFunctionError
--> 649 from . import _ufuncs
    650 from ._ufuncs import *
    652 from . import _basic

ImportError: dlopen(/Users/xanadu/miniforge3/envs/tf11/lib/python3.9/site-packages/scipy/special/_ufuncs.cpython-39-darwin.so, 0x0002): Library not loaded: '@rpath/liblapack.3.dylib'
  Referenced from: '/Users/xanadu/miniforge3/envs/tf11/lib/python3.9/site-packages/scipy/special/_ufuncs.cpython-39-darwin.so'
  Reason: tried: '/Users/xanadu/miniforge3/envs/tf11/lib/python3.9/site-packages/scipy/special/liblapack.3.dylib' (no such file), '/Users/xanadu/miniforge3/envs/tf11/lib/python3.9/site-packages/scipy/special/../../../../liblapack.3.dylib' (no such file), '/Users/xanadu/miniforge3/envs/tf11/lib/python3.9/site-packages/scipy/special/liblapack.3.dylib' (no such file), '/Users/xanadu/miniforge3/envs/tf11/lib/python3.9/site-packages/scipy/special/../../../../liblapack.3.dylib' (no such file), '/Users/xanadu/miniforge3/envs/tf11/bin/../lib/liblapack.3.dylib' (no such file), '/Users/xanadu/miniforge3/envs/tf11/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file)

Source code

from mrmustard.lab import *

Tracebacks

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Cell In[2], line 1
----> 1 from mrmustard.lab import *

File ~/miniforge3/envs/tf11/lib/python3.9/site-packages/mrmustard/lab/__init__.py:25
      1 # Copyright 2021 Xanadu Quantum Technologies Inc.
      2 
      3 # Licensed under the Apache License, Version 2.0 (the "License");
   (...)
     12 # See the License for the specific language governing permissions and
     13 # limitations under the License.
     15 r"""
     16 The lab module is all you need to construct and simulate photonic circuits.
     17 It contains the items you'd find in a lab:
   (...)
     22 * the Circuit class
     23 """
---> 25 from .circuit import *
     26 from .states import *
     27 from .gates import *

File ~/miniforge3/envs/tf11/lib/python3.9/site-packages/mrmustard/lab/circuit.py:26
     24 from typing import List, Tuple, Optional
     25 from mrmustard.types import Matrix, Vector
---> 26 from mrmustard.training import Parametrized
     27 from mrmustard.utils.xptensor import XPMatrix, XPVector
     28 from mrmustard.lab.abstract import Transformation

File ~/miniforge3/envs/tf11/lib/python3.9/site-packages/mrmustard/training/__init__.py:69
      1 # Copyright 2022 Xanadu Quantum Technologies Inc.
      2 
      3 # Licensed under the Apache License, Version 2.0 (the "License");
   (...)
     12 # See the License for the specific language governing permissions and
     13 # limitations under the License.
     15 """The optimizer module contains all logic for parameter and circuit optimization
     16 in Mr Mustard.
     17 
   (...)
     66 
     67 """
---> 69 from .parametrized import Parametrized
     70 from .optimizer import Optimizer

File ~/miniforge3/envs/tf11/lib/python3.9/site-packages/mrmustard/training/parametrized.py:22
     15 """This module contains the :class:`.Parametrized` class which acts as
     16 an abstract base class for all parametrized objects. Arguments of the
     17 class constructor generate a backend Tensor and are assigned to fields
     18 of the class.
     19 """
     21 from typing import Sequence, List, Generator, Any
---> 22 from mrmustard.math import Math
     23 from .parameter import create_parameter, Trainable, Constant, Parameter
     25 math = Math()

File ~/miniforge3/envs/tf11/lib/python3.9/site-packages/mrmustard/math/__init__.py:43
     40 from mrmustard import settings
     42 if importlib.util.find_spec("tensorflow"):
---> 43     from mrmustard.math.tensorflow import TFMath
     44 if importlib.util.find_spec("torch"):
     45     from mrmustard.math.torch import TorchMath

File ~/miniforge3/envs/tf11/lib/python3.9/site-packages/mrmustard/math/tensorflow.py:18
     15 """This module contains the Tensorflow implementation of the :class:`Math` interface."""
     17 import numpy as np
---> 18 import tensorflow as tf
     19 from thewalrus import hermite_multidimensional, grad_hermite_multidimensional
     21 from mrmustard.math.autocast import Autocast

File ~/miniforge3/envs/tf11/lib/python3.9/site-packages/tensorflow/__init__.py:37
     34 import sys as _sys
     35 import typing as _typing
---> 37 from tensorflow.python.tools import module_util as _module_util
     38 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader
     40 # Make sure code inside the TensorFlow codebase can use tf2.enabled() at import.

File ~/miniforge3/envs/tf11/lib/python3.9/site-packages/tensorflow/python/__init__.py:37
     29 # We aim to keep this file minimal and ideally remove completely.
     30 # If you are adding a new file with @tf_export decorators,
     31 # import it in modules_with_exports.py instead.
     32 
     33 # go/tf-wildcard-import
     34 # pylint: disable=wildcard-import,g-bad-import-order,g-import-not-at-top
     36 from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
---> 37 from tensorflow.python.eager import context
     39 # pylint: enable=wildcard-import
     40 
     41 # Bring in subpackages.
     42 from tensorflow.python import data

File ~/miniforge3/envs/tf11/lib/python3.9/site-packages/tensorflow/python/eager/context.py:29
     26 import numpy as np
     27 import six
---> 29 from tensorflow.core.framework import function_pb2
     30 from tensorflow.core.protobuf import config_pb2
     31 from tensorflow.core.protobuf import coordination_config_pb2

File ~/miniforge3/envs/tf11/lib/python3.9/site-packages/tensorflow/core/framework/function_pb2.py:5
      1 # -*- coding: utf-8 -*-
      2 # Generated by the protocol buffer compiler.  DO NOT EDIT!
      3 # source: tensorflow/core/framework/function.proto
      4 """Generated protocol buffer code."""
----> 5 from google.protobuf.internal import builder as _builder
      6 from google.protobuf import descriptor as _descriptor
      7 from google.protobuf import descriptor_pool as _descriptor_pool

ImportError: cannot import name 'builder' from 'google.protobuf.internal' (/Users/xanadu/miniforge3/envs/tf11/lib/python3.9/site-packages/google/protobuf/internal/__init__.py)

Additional information

I started a new user account on my Mac (M1 chip) to build a fresh env for Mr Mustard.

Using a tuple for the modes= argument gives a ValueError

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

I generally use tuples for immutable structures, and so expected that modes could be given as such. But it seems to work only with lists.

Actual behavior

A ValueError is raised; it should instead work just as if the modes= argument were a list.

Reproduces how often

Always.

System information

Mr Mustard: a differentiable bridge between phase space and Fock space.

Copyright 2021 Xanadu Quantum Technologies Inc.

Python version:            3.9.5

Platform info:             Linux-6.2.2-arch1-1-x86_64-with-glibc2.31

Installation path:         /usr/local/lib/python3.9/dist-packages/mrmustard

Mr Mustard version:        0.4.0

Numpy version:             1.23.5

Numba version:             0.56.4

Scipy version:             1.8.0

The Walrus version:        0.19.0

TensorFlow version:        2.10.1

Source code

from mrmustard.lab import BSgate
gate_tuple = BSgate(modes=(1,2)) >> BSgate(modes=(0,1))
gate_list = BSgate(modes=[1,2]) >> BSgate(modes=[0,1])
unitary = gate_list.U(cutoffs=(3,3,3)) # works
unitary = gate_tuple.U(cutoffs=(3,3,3))

Tracebacks

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[19], line 1
----> 1 unitary = gate_tuple.U(cutoffs=(3,3,3))

File /usr/local/lib/python3.9/dist-packages/mrmustard/lab/abstract/transformation.py:218, in Transformation.U(self, cutoffs)
    216 if not self.is_unitary:
    217     return None
--> 218 X, _, d = self.XYd
    219 return fock.wigner_to_fock_U(
    220     X if X is not None else math.eye(2 * self.num_modes),
    221     d if d is not None else math.zeros((2 * self.num_modes,)),
    222     shape=cutoffs * 2 if len(cutoffs) == self.num_modes else cutoffs,
    223 )

File /usr/local/lib/python3.9/dist-packages/mrmustard/lab/circuit.py:76, in Circuit.XYd(self)
     74 for op in self._ops:
     75     opx, opy, opd = op.XYd
---> 76     opX = XPMatrix.from_xxpp(opx, modes=(op.modes, op.modes), like_1=True)
     77     opY = XPMatrix.from_xxpp(opy, modes=(op.modes, op.modes), like_0=True)
     78     opd = XPVector.from_xxpp(opd, modes=op.modes)

File /usr/local/lib/python3.9/dist-packages/mrmustard/utils/xptensor.py:561, in XPMatrix.from_xxpp(cls, tensor, like_0, like_1, modes)
    559     tensor = math.reshape(tensor, [_ for n in tensor.shape for _ in (2, n // 2)])
    560     tensor = math.transpose(tensor, (1, 3, 0, 2))
--> 561 return XPMatrix(tensor, like_0, like_1, modes)

File /usr/local/lib/python3.9/dist-packages/mrmustard/utils/xptensor.py:538, in XPMatrix.__init__(self, tensor, like_0, like_1, modes)
    534     raise ValueError(f"like_0 and like_1 can't both be {like_0}")
    535 if not (
    536     isinstance(modes, tuple) and len(modes) == 2 and all(isinstance(m, list) for m in modes)
    537 ):
--> 538     raise ValueError("modes should be a tuple containing two lists (outmodes and inmodes)")
    539 if len(modes[0]) == 0 and len(modes[1]) == 0 and tensor is not None:
    540     if (
    541         tensor.shape[0] != tensor.shape[1] and like_0
    542     ):  # NOTE: we can't catch square coherences if no modes are specified

ValueError: modes should be a tuple containing two lists (outmodes and inmodes)

Additional information

This is actually on the develop branch at 1abad670dc76a83efd73f8189f3875bd7915c234.  But I see the same behaviour using v0.4.0.

X_matrix and Y_matrix for a state/operator

Before posting a feature request

  • I have searched existing GitHub issues to make sure the feature request does not already exist.

Feature details

For a state, if I want to get the attribute "Y_matrix", of course it is None (the XYd expression is more for a channel). So it is a little bit weird that when I "TAB" the attributes of a state I can choose "X_matrix", "Y_matrix" and "XYd".

Implementation

When calling for the possible attributes of an object, would it be better to identify which kind of object it is and show only the relevant attributes?

How important would you say this feature is?

1: Not important. Would be nice to have.

Additional information

No response

Optimization on states in the Fock basis is not working

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

I expect this code to run.

from mrmustard.training import Optimizer, Parametrized
from mrmustard.lab import *

squeezing = Sgate(r=1, r_trainable=True)

def cost_fn():
    return -(Fock(2)>>squeezing << Vacuum(1))

opt = Optimizer(euclidean_lr=0.05)
opt.minimize(cost_fn, by_optimizing=[squeezing], max_steps=100)

Actual behavior

It does not run.

Reproduces how often

Every time. Also occurs for the Dgate, BSgate, but not the Rgate. Likely due to custom gradients.

System information

Mr Mustard: a differentiable bridge between phase space and Fock space.
Copyright 2021 Xanadu Quantum Technologies Inc.

Python version:            3.10.11
Platform info:             Windows-10-10.0.19045-SP0
Installation path:         N/A
Mr Mustard version:        0.5.0-dev
Numpy version:             1.23.5
Numba version:             0.56.4
Scipy version:             1.10.1
The Walrus version:        0.20.0
TensorFlow version:        2.10.0

Source code

No response

Tracebacks

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[47], line 13
      9     return -(Fock(2)>>squeezing << Vacuum(1))
     12 opt = Optimizer(euclidean_lr=0.05)
---> 13 opt.minimize(
     14     cost_fn,
     15     by_optimizing=[squeezing],
     16     max_steps=100,
     17 )

File c:\path\mrmustard\training\optimizer.py:90, in Optimizer.minimize(self, cost_fn, by_optimizing, max_steps, callbacks)
     87 callbacks = self._coerce_callbacks(callbacks)
     89 try:
---> 90     self._minimize(cost_fn, by_optimizing, max_steps, callbacks)
     91 except KeyboardInterrupt:  # graceful exit
     92     self.log.info("Optimizer execution halted due to keyboard interruption.")

File c:\path\mrmustard\training\optimizer.py:118, in Optimizer._minimize(self, cost_fn, by_optimizing, max_steps, callbacks)
    109     self.callback_history["orig_cost"].append(orig_cost_fn())
    111 new_cost_fn, new_grads = self._run_callbacks(
    112     callbacks=callbacks,
    113     cost_fn=cost_fn,
    114     cost=cost,
    115     trainables=trainables,
    116 )
--> 118 self.apply_gradients(trainable_params.values(), new_grads or grads)
    119 self.opt_history.append(cost)
    120 bar.step(math.asnumpy(cost))

File c:\path\mrmustard\training\optimizer.py:144, in Optimizer.apply_gradients(self, trainable_params, grads)
    142 grads_and_vars = [(grad, p.value) for grad, p in grads_vars]
    143 update_method = param_update_method.get(param_type)
--> 144 update_method(grads_and_vars, param_lr)

File c:\path\mrmustard\training\parameter_update.py:68, in update_euclidean(grads_and_vars, euclidean_lr)
     66 """Updates the parameters using the euclidian gradients."""
     67 math.euclidean_opt.lr = euclidean_lr
---> 68 math.euclidean_opt.apply_gradients(grads_and_vars)

File c:\path\keras\optimizers\optimizer_v2\optimizer_v2.py:689, in OptimizerV2.apply_gradients(self, grads_and_vars, name, experimental_aggregate_gradients)
    648 def apply_gradients(
    649     self, grads_and_vars, name=None, experimental_aggregate_gradients=True
    650 ):
    651     """Apply gradients to variables.
    652 
    653     This is the second part of `minimize()`. It returns an `Operation` that
   (...)
    687       RuntimeError: If called in a cross-replica context.
    688     """
--> 689     grads_and_vars = optimizer_utils.filter_empty_gradients(grads_and_vars)
    690     var_list = [v for (_, v) in grads_and_vars]
    692     with tf.name_scope(self._name):
    693         # Create iteration if necessary.

File c:\path\keras\optimizers\optimizer_v2\utils.py:77, in filter_empty_gradients(grads_and_vars)
     75 if not filtered:
     76     variable = ([v.name for _, v in grads_and_vars],)
---> 77     raise ValueError(
     78         f"No gradients provided for any variable: {variable}. "
     79         f"Provided `grads_and_vars` is {grads_and_vars}."
     80     )
     81 if vars_with_empty_grads:
     82     logging.warning(
     83         (
     84             "Gradients do not exist for variables %s when minimizing the "
   (...)
     88         ([v.name for v in vars_with_empty_grads]),
     89     )

ValueError: No gradients provided for any variable: (['r:0'],). Provided `grads_and_vars` is ((None, ),).

Additional information

No response

Installation requires satisfying tensorflow-macos on Windows

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

Expect to install the specified version of MrMustard using only system-relevant dependencies.

Actual behavior

Attempting

pip install mrmustard==0.4.1

on Windows 10 throws two errors related to tensorflow-macos and prevents installation.
Trying to circumvent this using the no-dependencies parameter,

pip install --no-dependencies mrmustard==0.4.1

leads to problems when importing MrMustard.

Reproduces how often

On every attempt of

pip install mrmustard==0.4.1

System information

Mr Mustard: a differentiable bridge between phase space and Fock space.
Copyright 2021 Xanadu Quantum Technologies Inc.

Python version:            3.10.9
Platform info:             Windows-10-10.0.19045-SP0
Installation path:         C:\Users\Windows\anaconda3\lib\site-packages\mrmustard
Mr Mustard version:        0.4.1
Numpy version:             1.23.5
Numba version:             0.56.4
Scipy version:             1.10.0
The Walrus version:        0.20.0
TensorFlow version:        2.13.0
Torch version:             1.12.1

#after installation with --no-dependencies

Source code

pip install mrmustard==0.4.1

Tracebacks

(base) C:\Users\Windows>pip install mrmustard==0.4.1
Collecting mrmustard==0.4.1
  Using cached mrmustard-0.4.1-py3-none-any.whl (140 kB)
Requirement already satisfied: numpy in c:\users\windows\anaconda3\lib\site-packages (from mrmustard==0.4.1) (1.23.5)
Requirement already satisfied: scipy in c:\users\windows\anaconda3\lib\site-packages (from mrmustard==0.4.1) (1.10.0)
Requirement already satisfied: numba in c:\users\windows\anaconda3\lib\site-packages (from mrmustard==0.4.1) (0.56.4)
Requirement already satisfied: thewalrus>=0.17.0 in c:\users\windows\anaconda3\lib\site-packages (from mrmustard==0.4.1) (0.20.0)
INFO: pip is looking at multiple versions of mrmustard to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement tensorflow-macos<=2.10.0 (from mrmustard) (from versions: none)
ERROR: No matching distribution found for tensorflow-macos<=2.10.0

Additional information

No response

Trying to stop a looped optimization does not work as expected

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

When running a for loop that is iterating over the Optimizer (e.g. to sweep through a parameter and find an optimal circuit), if I want to interrupt the program (I tried in Spyder or in Jupyter notebook), the whole program should stop. That is, the loop should stop, not just the current iteration of the loop.

Actual behavior

Only the optimization in the current iteration of the loop stops, and the next iteration of the loop begins.

Reproduces how often

100%

System information

Mr Mustard: a differentiable bridge between phase space and Fock space.
Copyright 2018-2021 Xanadu Quantum Technologies Inc.

Python version:            3.8.12
Platform info:             Windows-10-10.0.19042-SP0
Installation path:         C:\Users\Xanadu\.conda\envs\mmenv\lib\site-packages\mrmustard
Mr Mustard version:        0.1.1
Numpy version:             1.21.4
Numba version:             0.53.1
Scipy version:             1.7.3
The Walrus version:        0.17.0
TensorFlow version:        2.7.0
Torch version:             None

Source code

from mrmustard.lab import Dgate, Attenuator, Vacuum, Coherent
from mrmustard.physics import fidelity
from mrmustard.utils.training import Optimizer

D = Dgate(x = 0.1, y = -0.5, x_trainable=True, y_trainable=True)
L = Attenuator(transmissivity=0.5)

def cost_fn_eucl():
    state_out = Vacuum(1) >> D >> L
    return 1 - fidelity(state_out, Coherent(0.1, 0.5))

# The for loop is the important part. The exact circuit is irrelevant.
for _ in range(10):
    opt = Optimizer(symplectic_lr=0.1, euclidean_lr=0.01)
    opt.minimize(cost_fn_eucl, by_optimizing=[D])

Tracebacks

No response

Additional information

You need to try to stop the code while the for loop is running.
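
A possible user-level workaround (a sketch, not a library fix): install a SIGINT handler that records the interrupt, so the outer loop can exit at the next iteration boundary even if Optimizer.minimize absorbs the KeyboardInterrupt internally. Note that with a custom handler installed, the in-progress minimize call still runs to completion.

import signal

from mrmustard.lab import Dgate, Attenuator, Vacuum, Coherent
from mrmustard.physics import fidelity
from mrmustard.utils.training import Optimizer

interrupted = False

def _on_sigint(signum, frame):
    # record the Ctrl+C instead of raising KeyboardInterrupt
    global interrupted
    interrupted = True

signal.signal(signal.SIGINT, _on_sigint)

D = Dgate(x=0.1, y=-0.5, x_trainable=True, y_trainable=True)
L = Attenuator(transmissivity=0.5)

def cost_fn_eucl():
    state_out = Vacuum(1) >> D >> L
    return 1 - fidelity(state_out, Coherent(0.1, 0.5))

for _ in range(10):
    if interrupted:
        break  # stop the whole sweep, not just the current iteration
    opt = Optimizer(symplectic_lr=0.1, euclidean_lr=0.01)
    opt.minimize(cost_fn_eucl, by_optimizing=[D])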

TensorFlow causing an error during optimization

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

The optimization should run to completion without errors.

Actual behavior

The optimization fails partway through with an InvalidArgumentError (see traceback below).

Reproduces how often

100% of the time.

System information

Mr Mustard: a differentiable bridge between phase space and Fock space.
Copyright 2018-2021 Xanadu Quantum Technologies Inc.

Python version:            3.9.7
Platform info:             Linux-5.17.5-76051705-generic-x86_64-with-glibc2.34
Installation path:         None of your business
Mr Mustard version:        0.2.0-dev
Numpy version:             1.21.5
Numba version:             0.55.1
Scipy version:             1.8.0
The Walrus version:        0.20.0-dev
TensorFlow version:        2.6.2
Torch version:             None

Source code

import numpy as np
from mrmustard.lab import *
import tensorflow as tf
from mrmustard.math import Math
math = Math()
from mrmustard.utils.training import Optimizer
#Target cat state: Normalized(|alpha> - |-alpha>)
alpha = 2.0
cutoff = 50
cat_amps = Coherent(alpha).ket([cutoff]) - Coherent(-alpha).ket([cutoff])
cat_amps = cat_amps / np.linalg.norm(cat_amps)
cat = State(ket=cat_amps)
cat_ket = cat.ket(cutoffs = [cutoff]).numpy()
cat_ket /= np.linalg.norm(cat_ket)
def cost_fn_cat():
    ket = output().ket(cutoffs=[cutoff])
    return -math.abs(math.sum(math.conj(cat_ket) * ket))**2

np.random.seed(21)
S = Sgate(r=[np.random.uniform(0,3.14),np.random.uniform(0,3.14)],phi=[np.random.uniform(0,3.14), np.random.uniform(0,3.14)],r_trainable=True, phi_trainable=True)
B = BSgate(theta=np.random.uniform(0,3.14), phi=np.random.uniform(0,3.14), theta_trainable=True, phi_trainable=True)
#Full train circuit
def output():
    return Vacuum(2) >> S >> B << Fock(3, modes =[0], normalize=True)
opt = Optimizer(euclidean_lr = 0.001)
opt.minimize(cost_fn_cat, by_optimizing=[S,B], max_steps=2000)

Tracebacks

2022-11-02 15:05:24.361254: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-11-02 15:05:24.361270: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2022-11-02 15:05:26.024104: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2022-11-02 15:05:26.024131: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2022-11-02 15:05:26.024145: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (pop-os): /proc/driver/nvidia/version does not exist
2022-11-02 15:05:26.025019: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

Step 106/2000 | 39.8 it/s ━╸━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   5% Cost = -0.03122 | ⏳  0:00:48

2022-11-02 15:05:30.483723: W tensorflow/core/framework/op_kernel.cc:1692] OP_REQUIRES failed at strided_slice_op.cc:108 : Invalid argument: slice index 3 of dimension 0 out of bounds.

Step 106/2000 | 39.8 it/s ━╸━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   5% Cost = -0.03122 | ⏳  0:00:48

---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
/tmp/ipykernel_362528/3454949026.py in <module>
     24     return Vacuum(2) >> S >> B << Fock(3, modes =[0], normalize=True)
     25 opt = Optimizer(euclidean_lr = 0.001)
---> 26 opt.minimize(cost_fn_cat, by_optimizing=[S,B], max_steps=2000)

~/Code/MrMustard/mrmustard/utils/training.py in minimize(self, cost_fn, by_optimizing, max_steps)
     72             with bar:
     73                 while not self.should_stop(max_steps):
---> 74                     cost, grads = math.value_and_gradients(cost_fn, params)
     75                     update_symplectic(params["symplectic"], grads["symplectic"], self.symplectic_lr)
     76                     update_orthogonal(params["orthogonal"], grads["orthogonal"], self.orthogonal_lr)

~/Code/MrMustard/mrmustard/math/tensorflow.py in value_and_gradients(self, cost_fn, parameters)
    315         """
    316         with tf.GradientTape() as tape:
--> 317             loss = cost_fn()
    318         gradients = tape.gradient(loss, list(parameters.values()))
    319         return loss, dict(zip(parameters.keys(), gradients))

/tmp/ipykernel_362528/3454949026.py in cost_fn_cat()
     14 cat_ket /= np.linalg.norm(cat_ket)
     15 def cost_fn_cat():
---> 16     ket = output().ket(cutoffs=[cutoff])
     17     return -math.abs(math.sum(math.conj(cat_ket) * ket))**2
     18 

/tmp/ipykernel_362528/3454949026.py in output()
     22 #Full train circuit
     23 def output():
---> 24     return Vacuum(2) >> S >> B << Fock(3, modes =[0], normalize=True)
     25 opt = Optimizer(euclidean_lr = 0.001)
     26 opt.minimize(cost_fn_cat, by_optimizing=[S,B], max_steps=2000)

~/Code/MrMustard/mrmustard/lab/abstract/state.py in __lshift__(self, other)
    471         E.g., ``self << other`` where other is a ``State`` and ``self`` is either a ``State`` or a ``Transformation``.
    472         """
--> 473         return other.primal(self)
    474 
    475     def __add__(self, other: State):

~/Code/MrMustard/mrmustard/lab/abstract/state.py in primal(self, other)
    356             ]
    357             try:
--> 358                 out_fock = self._preferred_projection(other, other.indices(self.modes))
    359             except AttributeError:
    360                 # matching other's cutoffs

~/Code/MrMustard/mrmustard/lab/states.py in _preferred_projection(self, other, mode_indices)
    500             else:
    501                 getitem.append(slice(None))
--> 502         output = other.fock[tuple(getitem)] if other.is_pure else other.fock[tuple(getitem) * 2]
    503         if self._normalize:
    504             return fock.normalize(output, is_dm=other.is_mixed)

~/miniconda3/envs/mm/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
    204     """Call target, and fall back on dispatchers if there is a TypeError."""
    205     try:
--> 206       return target(*args, **kwargs)
    207     except (TypeError, ValueError):
    208       # Note: convert_to_eager_tensor currently raises a ValueError, not a

~/miniconda3/envs/mm/lib/python3.9/site-packages/tensorflow/python/ops/array_ops.py in _slice_helper(tensor, slice_spec, var)
   1039       var_empty = constant([], dtype=dtypes.int32)
   1040       packed_begin = packed_end = packed_strides = var_empty
-> 1041     return strided_slice(
   1042         tensor,
   1043         packed_begin,

~/miniconda3/envs/mm/lib/python3.9/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
    204     """Call target, and fall back on dispatchers if there is a TypeError."""
    205     try:
--> 206       return target(*args, **kwargs)
    207     except (TypeError, ValueError):
    208       # Note: convert_to_eager_tensor currently raises a ValueError, not a

~/miniconda3/envs/mm/lib/python3.9/site-packages/tensorflow/python/ops/array_ops.py in strided_slice(input_, begin, end, strides, begin_mask, end_mask, ellipsis_mask, new_axis_mask, shrink_axis_mask, var, name)
   1212     strides = ones_like(begin)
   1213 
-> 1214   op = gen_array_ops.strided_slice(
   1215       input=input_,
   1216       begin=begin,

~/miniconda3/envs/mm/lib/python3.9/site-packages/tensorflow/python/ops/gen_array_ops.py in strided_slice(input, begin, end, strides, begin_mask, end_mask, ellipsis_mask, new_axis_mask, shrink_axis_mask, name)
  10509       return _result
  10510     except _core._NotOkStatusException as e:
> 10511       _ops.raise_from_not_ok_status(e, name)
  10512     except _core._FallbackException:
  10513       pass

~/miniconda3/envs/mm/lib/python3.9/site-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
   6939   message = e.message + (" name: " + name if name is not None else "")
   6940   # pylint: disable=protected-access
-> 6941   six.raise_from(core._status_to_exception(e.code, message), None)
   6942   # pylint: enable=protected-access
   6943 

~/miniconda3/envs/mm/lib/python3.9/site-packages/six.py in raise_from(value, from_value)

InvalidArgumentError: slice index 3 of dimension 0 out of bounds. [Op:StridedSlice] name: strided_slice/

Additional information

No response

test issue

Before posting a feature request

  • I have searched existing GitHub issues to make sure the feature request does not already exist.

Feature details

this is just a test

Implementation

No response

How important would you say this feature is?

1: Not important. Would be nice to have.

Additional information

No response

Attenuator not being applied to state

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

If I apply an Attenuator to a state, I would expect the output state to be mixed and to have had loss applied to it.

Actual behavior

The state does not seem to be affected by the Attenuator.

Reproduces how often

Always.

System information

Mr Mustard: a differentiable bridge between phase space and Fock space.
Copyright 2018-2021 Xanadu Quantum Technologies Inc.

Python version:            3.10.4
Mr Mustard version:        0.3.0-dev
Numpy version:             1.21.6
Numba version:             0.55.1
Scipy version:             1.8.0
The Walrus version:        0.19.0
TensorFlow version:        2.9.1
Torch version:             None

Source code

import numpy as np
from mrmustard.lab import *

state = Vacuum(2) >> Sgate(1.15,modes=[0,1]) >> Rgate(np.pi/2,modes=[1]) >> BSgate(np.pi/4,modes=[0,1]) << Fock(4)
state = state >> Attenuator(0.5, modes=[0])
state.is_pure  # returns True but should be False

Tracebacks

No response

Additional information

No response

Weird behaviour of marginal states

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

Combining states obtained via .get_modes should just work. E.g.

from mrmustard import settings
from mrmustard.lab import Fock, Vacuum, BSgate

settings.AUTOCUTOFF_MIN_CUTOFF = 10
out1 = (Fock(1) & Vacuum(1)) >> BSgate(1.0)
(Vacuum(1) & out1.get_modes(1)[0]) >> BSgate(1.0)

should yield a valid state.

Actual behavior

We get an unphysical state.

Reproduces how often

100%

System information

Python version:            3.9.7
Platform info:             macOS-12.0.1-arm64-i386-64bit
Installation path:         /Users/filippo/Dropbox/Work/Xanadu/projects/MrMustard/mrmustard
Mr Mustard version:        0.3.0-dev
Numpy version:             1.21.4
Numba version:             0.53.1
Scipy version:             1.7.3
The Walrus version:        0.20.0-dev
TensorFlow version:        2.6.2
Torch version:             1.10.0

Source code

No response

Tracebacks

No response

Additional information

No response

Predefined states don't take modes as a parameter

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

When creating predefined states, the modes information should be passed through to the init function of State.

Actual behavior

No modes information is taken.

Reproduces how often

Every time a circuit that depends on mode labels is constructed.

System information

Not relevant

Source code

No response

Tracebacks

No response

Additional information

No response

Wigner plotting unstable

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

Wigner plots should display correctly, as they do in qutip for the same states.

Actual behavior

Wigner plots do not display correctly, likely due to numerical instability at high Fock cutoffs.

Reproduces how often

Every time for high Fock cutoff.

System information

Mr Mustard: a differentiable bridge between phase space and Fock space.
Copyright 2021 Xanadu Quantum Technologies Inc.

Python version:            3.10.11
Platform info:             Windows-10-10.0.19045-SP0
Installation path:         N/A
Mr Mustard version:        0.5.0-dev
Numpy version:             1.23.5
Numba version:             0.56.4
Scipy version:             1.10.1
The Walrus version:        0.20.0
TensorFlow version:        2.10.0

Source code

from mrmustard.utils.wigner import wigner_discretized
import numpy as np
import qutip
import matplotlib.pyplot as plt


def compare_wigner(alpha):
    state = qutip.coherent_dm(cutoff, alpha)
    W_mm, Q, P = wigner_discretized(state.full(), qvec, pvec, hbar=1)
    W_qt = qutip.wigner(state, qvec, pvec)

    plt.figure()
    plt.contourf(Q, P, W_mm, 128)
    plt.title(f"mr mustard, $\\alpha={alpha}$")

    plt.figure()
    plt.contourf(qvec, pvec, np.real(W_qt), 128)
    plt.title(f"qutip, $\\alpha={alpha}$")


cutoff = 100
qvec = np.linspace(-10, 10, 200)
pvec = np.linspace(-10, 10, 200)

compare_wigner(alpha=6)
compare_wigner(alpha=7)

Tracebacks

No response

Additional information

No response

Consistent cutoff values for states on fock representation

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

Mr Mustard automatically determines the same fixed cutoff value for all states or it keeps the value set with settings.AUTOCUTOFF_MIN_CUTOFF and settings.AUTOCUTOFF_MAX_CUTOFF.

Actual behavior

States have different matrix size due to different cutoff values hence raising errors when performing math operations between states.

Reproduces how often

Sometimes when cutoff values are not explicitly defined.

System information

Not relevant.

Source code

import numpy as np
from mrmustard.lab import Fock, Attenuator, Sgate, BSgate, Vacuum
from mrmustard.physics import fidelity, normalize
from mrmustard.utils.training import Optimizer
from mrmustard import settings

settings.AUTOCUTOFF_MIN_CUTOFF = 5
settings.AUTOCUTOFF_MAX_CUTOFF = 5

S = Sgate(r=1,r_trainable=True,r_bounds=(0,2))
BS = BSgate(theta=np.pi/3,phi=np.pi/4,theta_trainable=True,phi_trainable=True)[0,1]

def cost_fn_both_mixed():
    state_out = Vacuum(2) >> S >> BS >> Attenuator(0.9)[0,1] << Fock([2], [1])
    return 1 - fidelity(normalize(state_out), Fock([2],[0]) >> Attenuator(0.9))

opt = Optimizer(symplectic_lr=0.1, euclidean_lr=0.01)
opt.minimize(cost_fn_both_mixed, by_optimizing=[S,BS])

Tracebacks

---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
Input In [1], in <module>
     15     return 1 - fidelity(normalize(state_out), Fock([2],[0]) >> Attenuator(0.9))
     17 opt = Optimizer(symplectic_lr=0.1, euclidean_lr=0.01)
---> 18 opt.minimize(cost_fn_both_mixed, by_optimizing=[S,BS])

File ~/xanadu/MrMustard/mrmustard/utils/training.py:74, in Optimizer.minimize(self, cost_fn, by_optimizing, max_steps)
     72 with bar:
     73     while not self.should_stop(max_steps):
---> 74         cost, grads = math.value_and_gradients(cost_fn, params)
     75         update_symplectic(params["symplectic"], grads["symplectic"], self.symplectic_lr)
     76         update_orthogonal(params["orthogonal"], grads["orthogonal"], self.orthogonal_lr)

File ~/xanadu/MrMustard/mrmustard/math/tensorflow.py:317, in TFMath.value_and_gradients(self, cost_fn, parameters)
    306 r"""Computes the loss and gradients of the given cost function.
    307 
    308 Args:
   (...)
    314     The loss and the gradients.
    315 """
    316 with tf.GradientTape() as tape:
--> 317     loss = cost_fn()
    318 gradients = tape.gradient(loss, list(parameters.values()))
    319 return loss, dict(zip(parameters.keys(), gradients))

Input In [1], in cost_fn_both_mixed()
     13 def cost_fn_both_mixed():
     14     state_out = Vacuum(2) >> S >> BS >> Attenuator(0.9)[0,1] << Fock([2], [1])
---> 15     return 1 - fidelity(normalize(state_out), Fock([2],[0]) >> Attenuator(0.9))

File ~/xanadu/MrMustard/mrmustard/physics/__init__.py:39, in fidelity(A, B)
     37 if A.is_gaussian and B.is_gaussian:
     38     return gaussian.fidelity(A.means, A.cov, B.means, B.cov, settings.HBAR)
---> 39 return fock.fidelity(A.fock, B.fock, a_ket=A._ket is not None, b_ket=B._ket is not None)

File ~/xanadu/MrMustard/mrmustard/physics/fock.py:254, in fidelity(state_a, state_b, a_ket, b_ket)
    246     return math.real(
    247         math.sum(math.conj(b) * math.matvec(math.reshape(state_a, (len(b), len(b))), b))
    248     )
    250 # mixed state
    251 # Richard Jozsa (1994) Fidelity for Mixed Quantum States, Journal of Modern Optics, 41:12, 2315-2323, DOI: 10.1080/09500349414552171
    252 return (
    253     math.trace(
--> 254         math.sqrtm(math.matmul(math.matmul(math.sqrtm(state_a), state_b), math.sqrtm(state_a)))
    255     )
    256     ** 2
    257 )

File ~/xanadu/MrMustard/mrmustard/math/autocast.py:68, in Autocast.__call__.<locals>.wrapper(backend, *args, **kwargs)
     65 @wraps(func)
     66 def wrapper(backend, *args, **kwargs):
     67     args, kwargs = self.cast_all(backend, *args, **kwargs)
---> 68     return func(backend, *args, **kwargs)

File ~/xanadu/MrMustard/mrmustard/math/tensorflow.py:181, in TFMath.matmul(self, a, b, transpose_a, transpose_b, adjoint_a, adjoint_b)
    171 @Autocast()
    172 def matmul(
    173     self,
   (...)
    179     adjoint_b=False,
    180 ) -> tf.Tensor:
--> 181     return tf.linalg.matmul(a, b, transpose_a, transpose_b, adjoint_a, adjoint_b)

File ~/xanadu/MrMustard/venv/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:206, in add_dispatch_support.<locals>.wrapper(*args, **kwargs)
    204 """Call target, and fall back on dispatchers if there is a TypeError."""
    205 try:
--> 206   return target(*args, **kwargs)
    207 except (TypeError, ValueError):
    208   # Note: convert_to_eager_tensor currently raises a ValueError, not a
    209   # TypeError, when given unexpected types.  So we need to catch both.
    210   result = dispatch(wrapper, args, kwargs)

File ~/xanadu/MrMustard/venv/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py:3654, in matmul(a, b, transpose_a, transpose_b, adjoint_a, adjoint_b, a_is_sparse, b_is_sparse, output_type, name)
   3651   return gen_math_ops.batch_mat_mul_v3(
   3652       a, b, adj_x=adjoint_a, adj_y=adjoint_b, Tout=output_type, name=name)
   3653 else:
-> 3654   return gen_math_ops.mat_mul(
   3655       a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)

File ~/xanadu/MrMustard/venv/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py:5696, in mat_mul(a, b, transpose_a, transpose_b, name)
   5694   return _result
   5695 except _core._NotOkStatusException as e:
-> 5696   _ops.raise_from_not_ok_status(e, name)
   5697 except _core._FallbackException:
   5698   pass

File ~/xanadu/MrMustard/venv/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:6941, in raise_from_not_ok_status(e, name)
   6939 message = e.message + (" name: " + name if name is not None else "")
   6940 # pylint: disable=protected-access
-> 6941 six.raise_from(core._status_to_exception(e.code, message), None)

File <string>:3, in raise_from(value, from_value)

InvalidArgumentError: In[0] mismatch In[1] shape: 5 vs. 3: [5,5] [3,3] 0 0 [Op:MatMul]

Additional information

Note that the following code works correctly when the cutoff values are defined explicitly and agree with settings.AUTOCUTOFF_MIN_CUTOFF and settings.AUTOCUTOFF_MAX_CUTOFF:

import numpy as np
from mrmustard.lab import Fock, Attenuator, Sgate, BSgate, Vacuum
from mrmustard.physics import fidelity, normalize
from mrmustard.utils.training import Optimizer
from mrmustard import settings

settings.AUTOCUTOFF_MIN_CUTOFF = 5
settings.AUTOCUTOFF_MAX_CUTOFF = 5

S = Sgate(r=1,r_trainable=True,r_bounds=(0,2))
BS = BSgate(theta=np.pi/3,phi=np.pi/4,theta_trainable=True,phi_trainable=True)[0,1]

def cost_fn_both_mixed():
    state_out = Vacuum(2) >> S >> BS >> Attenuator(0.9)[0,1] << Fock([2], [1], cutoffs = [5])
    return 1 - fidelity(normalize(state_out), Fock([2],[0], cutoffs = [5]) >> Attenuator(0.9))

opt = Optimizer(symplectic_lr=0.1, euclidean_lr=0.01)
opt.minimize(cost_fn_both_mixed, by_optimizing=[S,BS])

Make MrMustard independent of SF

Currently MrMustard depends on Strawberry Fields. The dependency is only for using the Wigner function plotting capabilities of SF, which are in turn just borrowed from qutip. It would be nice to remove this very onerous requirement and make MrM independent of SF. Ideally one can simply take this method

https://github.com/XanaduAI/strawberryfields/blob/2c27f1c1ebe4bc53866dbe0420c9a6bafda5aeb4/strawberryfields/backends/states.py#L725

and turn it into a standalone function in the utils/graphics.py module.

This would simplify the installation of MrM and, since MrM and SF might require different versions of TF, avoid potential dependency conflicts.
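
For reference, here is a sketch of what such a standalone function could look like, adapted from qutip's iterative method (which the SF implementation is itself based on); the name and signature are placeholders:

import numpy as np

def wigner_standalone(rho, xvec, pvec, hbar=2.0):
    """Discretized Wigner function of a Fock-basis density matrix rho,
    evaluated on the grid xvec x pvec (iterative method from qutip)."""
    X, P = np.meshgrid(xvec, pvec)
    A = (X + 1j * P) / np.sqrt(2 * hbar)
    cutoff = rho.shape[0]
    # W_list[n] is built up recursively, starting from the Wigner function of |0><0|
    W_list = [np.zeros_like(A, dtype=complex) for _ in range(cutoff)]
    W_list[0] = np.exp(-2.0 * np.abs(A) ** 2) / np.pi
    W = np.real(rho[0, 0]) * np.real(W_list[0])
    for n in range(1, cutoff):
        W_list[n] = (2.0 * A * W_list[n - 1]) / np.sqrt(n)
        W += 2.0 * np.real(rho[0, n] * W_list[n])
    for m in range(1, cutoff):
        temp = W_list[m].copy()
        # W_list[m] now holds the Wigner function of |m><m|
        W_list[m] = (2.0 * np.conj(A) * temp - np.sqrt(m) * W_list[m - 1]) / np.sqrt(m)
        W += np.real(rho[m, m] * W_list[m])
        for n in range(m + 1, cutoff):
            # W_list[n] now holds the Wigner function of |m><n|
            temp2 = (2.0 * A * W_list[n - 1] - np.sqrt(m) * temp) / np.sqrt(n)
            temp = W_list[n].copy()
            W_list[n] = temp2
            W += 2.0 * np.real(rho[m, n] * W_list[n])
    return W / hbar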

Incorrect handling of mixed Gaussian states

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

Given a Gaussian state one expects that state.is_pure == state.is_hilbert_vector. However, the implementation of

  • is_hilbert_vector uses np.allclose with the default atol value, whereas
  • is_pure uses np.isclose with the non-default value atol = 1e-6,

consequently the state can be treated as a vector in some contexts and as a density matrix in others.

Actual behavior

One incorrectly observes state.is_pure != state.is_hilbert_vector.

Reproduces how often

Whenever np.isclose(purity, 1.0, atol = 1e-6) != np.isclose(purity, 1.0).
For example if purity == 0.9999899.
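
A minimal sketch of the tolerance mismatch (atol values as named above; rtol is NumPy's default 1e-5):

import numpy as np

purity = 0.9999899
print(np.isclose(purity, 1.0, atol=1e-6))  # True:  the is_pure-style check passes
print(np.isclose(purity, 1.0))             # False: the is_hilbert_vector-style check fails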

System information

...

Source code

import mrmustard.lab

fail_state = (
    mrmustard.lab.TMSV(r = 0.0115, modes = [ 0, 1 ])
    >> mrmustard.lab.Attenuator(transmissivity = 0.96)[1]
)
assert fail_state.is_pure == fail_state.is_hilbert_vector

Tracebacks

No response

Additional information

No response

Fidelity between two mixed states

Before posting a feature request

  • I have searched existing GitHub issues to make sure the feature request does not already exist.

Feature details

Compute the fidelity between two mixed states.

Implementation

I tried using
trace( sqrtm( matmul( matmul( sqrtm(rho1), rho2), sqrtm(rho1) ) ) )**2
but was hoping for something with a faster implementation.
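
One square root can be avoided, since Tr sqrt(M) equals the sum of the square roots of the eigenvalues of M. A sketch in plain NumPy/SciPy (not MrMustard's API):

import numpy as np
from scipy.linalg import sqrtm

def fidelity_mixed(rho1, rho2):
    """Uhlmann fidelity F = (Tr sqrt(sqrt(rho1) rho2 sqrt(rho1)))**2,
    with the outer sqrtm replaced by an eigendecomposition."""
    sq = sqrtm(rho1)
    M = sq @ rho2 @ sq
    M = (M + M.conj().T) / 2  # re-hermitize to tame floating-point noise
    evals = np.linalg.eigvalsh(M)
    return float(np.sum(np.sqrt(np.clip(evals, 0.0, None))) ** 2)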

How important would you say this feature is?

2: Somewhat important. Needed this quarter.

Additional information

No response

NotImplementedError: Function ``convolution`` not implemented for backend ``numpy``

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

Hello,
I have recently started using MrMustard, and I tried to run this simple piece of code from the website to check that I installed it correctly. I attached a text file that contains all the dependencies of the Python environment I have been using. To set up the environment, I ran the following commands in the Anaconda prompt:

conda create -n mrmustardtest python=3.9
conda activate mrmustardtest
pip install git+https://github.com/XanaduAI/MrMustard.git

Actual behavior

Error message

Reproduces how often

Every time

System information

name: mrmustardtest
channels:
  - defaults
dependencies:
  - ca-certificates=2023.12.12=haa95532_0
  - openssl=3.0.12=h2bbff1b_0
  - pip=23.3.1=py39haa95532_0
  - python=3.9.18=h1aa4202_0
  - setuptools=68.2.2=py39haa95532_0
  - sqlite=3.41.2=h2bbff1b_0
  - tzdata=2023d=h04d1e81_0
  - vc=14.2=h21ff451_1
  - vs2015_runtime=14.27.29016=h5e58377_2
  - wheel=0.41.2=py39haa95532_0
  - pip:
      - absl-py==2.0.0
      - astunparse==1.6.3
      - cachetools==5.3.2
      - certifi==2023.11.17
      - charset-normalizer==3.3.2
      - click==8.1.7
      - cloudpickle==3.0.0
      - colorama==0.4.6
      - commonmark==0.9.1
      - cycler==0.12.1
      - dask==2024.1.0
      - decorator==5.1.1
      - dm-tree==0.1.8
      - flatbuffers==23.5.26
      - fonttools==4.47.2
      - fsspec==2023.12.2
      - gast==0.5.4
      - google-auth==2.26.2
      - google-auth-oauthlib==1.2.0
      - google-pasta==0.2.0
      - grpcio==1.60.0
      - h5py==3.10.0
      - idna==3.6
      - importlib-metadata==7.0.1
      - julia==0.6.1
      - keras==2.15.0
      - kiwisolver==1.4.5
      - libclang==16.0.6
      - llvmlite==0.39.1
      - locket==1.0.0
      - markdown==3.5.2
      - markupsafe==2.1.3
      - matplotlib==3.5.0
      - ml-dtypes==0.2.0
      - mpmath==1.3.0
      - mrmustard==0.7.0.dev0
      - networkx==3.2.1
      - numba==0.56.4
      - numpy==1.23.5
      - oauthlib==3.2.2
      - opt-einsum==3.3.0
      - packaging==23.2
      - partd==1.4.1
      - pillow==10.2.0
      - protobuf==4.23.4
      - pyasn1==0.5.1
      - pyasn1-modules==0.3.0
      - pygments==2.17.2
      - pyparsing==3.1.1
      - python-dateutil==2.8.2
      - pyyaml==6.0.1
      - requests==2.31.0
      - requests-oauthlib==1.3.1
      - rich==10.15.1
      - rsa==4.9
      - scipy==1.8.0
      - setuptools-scm==8.0.4
      - six==1.16.0
      - sympy==1.12
      - tensorboard==2.15.1
      - tensorboard-data-server==0.7.2
      - tensorflow==2.15.0
      - tensorflow-estimator==2.15.0
      - tensorflow-intel==2.15.0
      - tensorflow-io-gcs-filesystem==0.31.0
      - tensorflow-probability==0.22.1
      - termcolor==2.4.0
      - thewalrus==0.19.0
      - tomli==2.0.1
      - toolz==0.12.0
      - tqdm==4.66.1
      - typing-extensions==4.9.0
      - urllib3==2.1.0
      - werkzeug==3.0.1
      - wrapt==1.14.1
      - zipp==3.17.0
prefix: C:\Users\em1120\AppData\Local\anaconda3\envs\mrmustardtest

Source code

from mrmustard.lab import *
results = Gaussian(3) << PNRDetector(efficiency = [0.9, 0.8], modes = [0,1])
print(results[2,3])

Tracebacks

Traceback (most recent call last):
  File "C:\Users\em1120\AppData\Local\anaconda3\envs\mrmustardtest\lib\site-packages\mrmustard\math\backend_manager.py", line 105, in _apply
    attr = getattr(self.backend, fn)
AttributeError: 'BackendNumpy' object has no attribute 'convolution'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\Users\em1120\qkdlab\Simulations\MrMustardSimulation\QKDProtocol.py", line 48, in <module>
    results = Gaussian(3) << PNRDetector(efficiency = [0.9, 0.8], modes = [0,1])
  File "C:\Users\em1120\AppData\Local\anaconda3\envs\mrmustardtest\lib\site-packages\mrmustard\lab\detectors.py", line 97, in __init__
    self.recompute_stochastic_channel()
  File "C:\Users\em1120\AppData\Local\anaconda3\envs\mrmustardtest\lib\site-packages\mrmustard\lab\detectors.py", line 126, in recompute_stochastic_channel
    math.convolve_probs_1d(
  File "C:\Users\em1120\AppData\Local\anaconda3\envs\mrmustardtest\lib\site-packages\mrmustard\math\backend_manager.py", line 1455, in convolve_probs_1d
    return self.convolve_probs(prob, q)
  File "C:\Users\em1120\AppData\Local\anaconda3\envs\mrmustardtest\lib\site-packages\mrmustard\math\backend_manager.py", line 1470, in convolve_probs
    return self.convolution(
  File "C:\Users\em1120\AppData\Local\anaconda3\envs\mrmustardtest\lib\site-packages\mrmustard\math\backend_manager.py", line 430, in convolution
    return self._apply("convolution", (array, filters, padding, data_format))
  File "C:\Users\em1120\AppData\Local\anaconda3\envs\mrmustardtest\lib\site-packages\mrmustard\math\backend_manager.py", line 109, in _apply
    raise NotImplementedError(msg)
NotImplementedError: Function ``convolution`` not implemented for backend ``numpy``.

Additional information

No response
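
A possible workaround, assuming a TensorFlow installation is acceptable: switch MrMustard's backend away from numpy, which lacks convolution. The change_backend call below follows the 0.7 backend manager and should be treated as a sketch:

from mrmustard import math
from mrmustard.lab import *

math.change_backend("tensorflow")  # the numpy backend does not implement ``convolution``

results = Gaussian(3) << PNRDetector(efficiency=[0.9, 0.8], modes=[0, 1])
print(results[2, 3])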

Autocutoffs not being respected

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

I expect the cutoffs to be [10,10] in the first print statement, but they come out as [3,2].

This is obviously a problem when the BSgate is applied, since it keeps those same cutoffs.

import numpy as np
from mrmustard import settings
from mrmustard.lab import *

settings.AUTOCUTOFF_MIN_CUTOFF = 10

print((Fock(2)&Fock(1)).cutoffs)

print(((Fock(2)&Fock(1)) >> BSgate(np.pi/4)).cutoffs)

Actual behavior

See above.

Reproduces how often

Every time.

System information

Mr Mustard: a differentiable bridge between phase space and Fock space.
Copyright 2021 Xanadu Quantum Technologies Inc.

Python version:            3.10.11
Platform info:             Windows-10-10.0.19045-SP0
Installation path:         N/A
Mr Mustard version:        0.5.0-dev
Numpy version:             1.23.5
Numba version:             0.56.4
Scipy version:             1.10.1
The Walrus version:        0.20.0
TensorFlow version:        2.10.0

Source code

No response

Tracebacks

No response

Additional information

No response

Homodyne and heterodyne detectors not accepting modes in kwargs

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

Homodyne(modes=[m],...) should apply a homodyne detector to mode m. Similarly for Heterodyne.

Actual behavior

It applies it to mode 0.

Reproduces how often

Homodyne(modes=[m],...) always applies it to mode 0.
Note that Homodyne(...)[m] works correctly, applying it to mode m.
Similarly for Heterodyne.

System information

Python version:            3.8.0
Platform info:             Windows-10-10.0.19041-SP0
Installation path:         C:\Users\Rache\AppData\Local\Programs\Python\Python38\lib\site-packages\mrmustard
Mr Mustard version:        0.1.1
Numpy version:             1.19.2
Numba version:             0.53.1
Scipy version:             1.4.1
The Walrus version:        0.17.0
TensorFlow version:        2.7.0
Torch version:             None

Source code

from mrmustard.lab import *
import numpy as np
from mrmustard.utils.graphics import mikkel_plot
import matplotlib.pyplot as plt

# Initial state is two modes with "diagonal" (angle=pi/2) squeezed state in mode 0
# and "vertical" (angle=0) squeezed state in mode 1
S1 = Sgate(modes=[0], r=1, phi=np.pi/2)
S2 = Sgate(modes=[1], r=1, phi=0)
initial_state = Vacuum(2) >> S1 >> S2

# Because the modes are separable, measuring in one mode should leave
# the state as simply the state in the unmeasured mode unchanged

# A homodyne measurement on the second mode should leave
# the diagonal state in the first mode but doesn't
final_state = initial_state << Homodyne(modes=[1],quadrature_angle=0,result=[0.3])
mikkel_plot(np.array(final_state.dm()))

# But coded like this does work
final_state = initial_state << Homodyne(quadrature_angle=0,result=[0.3])[1]
mikkel_plot(np.array(final_state.dm()))

plt.show()

Tracebacks

No response

Additional information

Proposed fix from @ilan-tz:

`Homodyne` initializes `DisplacedSqueezed` which initializes `State`.
Line 370 in states.py currently is 
`State.__init__(self, cov=cov, means=means, cutoffs=cutoffs)` 
but should be
`State.__init__(self, cov=cov, means=means, cutoffs=cutoffs, modes=modes)`

Similarly for `Heterodyne`, line 112 in `Coherent` should have the modes kwarg added.

In fact, in general the modes of a state do not seem to depend on the `modes` kwarg, only on the length of the list of parameters given.

Set cutoff based on contraction

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

For objects in lab_dev, when contracting a Bargmann object with a Fock object, the Bargmann object should set its cutoff based on the Fock object.

Actual behavior

The Bargmann object takes on the autocutoff.

Reproduces how often

Every time.

System information

N/A

Source code

No response

Tracebacks

No response

Additional information

No response

Number of modes supplied (...) must match the representation dimension ...

Before posting a bug report

  • I have searched existing GitHub issues to make sure the issue does not already exist.

Expected behavior

One expects a projection onto a Fock state to work.

Actual behavior

One is met with

AssertionError: Number of modes supplied (...) must match the representation dimension ...

Reproduces how often

Whenever bug #292 is triggered.

System information

...

Source code

import mrmustard.lab

fail_state = (
    mrmustard.lab.TMSV(r = 0.0115, modes = [ 0, 1 ])
    >> mrmustard.lab.Attenuator(transmissivity = 0.96, modes = [ 1 ])
)
fail_state << mrmustard.lab.Fock(1, modes = [ 1 ])

Tracebacks

No response

Additional information

`Fock` implements [_preferred_projection](https://github.com/XanaduAI/MrMustard/blob/develop/mrmustard/lab/states.py#L493) and [_contract_with_other](https://github.com/XanaduAI/MrMustard/blob/develop/mrmustard/lab/abstract/state.py#L424) takes [advantage of it](https://github.com/XanaduAI/MrMustard/blob/develop/mrmustard/lab/abstract/state.py#L429). When performing [_project_onto_fock](https://github.com/XanaduAI/MrMustard/blob/develop/mrmustard/lab/abstract/state.py#L395), the marginal is computed through [_preferred_projection](https://github.com/XanaduAI/MrMustard/blob/develop/mrmustard/lab/states.py#L516), which returns either a vector or a density matrix depending on the value of `other.is_hilbert_vector`. The form of the resulting state [is determined](https://github.com/XanaduAI/MrMustard/blob/develop/mrmustard/lab/abstract/state.py#L412) through `other.is_pure`. As per the referenced bug https://github.com/XanaduAI/MrMustard/issues/292, these two properties are not equal.

Take advantage of known techniques to speed up simulations of circuits with Attenuators

Before posting a feature request

  • I have searched existing GitHub issues to make sure the feature request does not already exist.

Feature details

For the Attenuator/Amplifier channels in the Fock basis, the Kraus operators are analytically known and diagonal in the Fock basis. It would be nice if they were used when applying those channels to Fock basis states instead of the current default method of having to compute the Choi state for the channel and converting it to the Fock basis. It's in a similar spirit to how the displacement gate, squeezer, beamsplitter, etc. have custom Fock implementations for speed and accuracy. It would fit very naturally with the new tensor contraction methods being implemented as well!
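
For concreteness, a minimal sketch of applying the pure-loss channel directly through its known Fock-basis Kraus operators (plain NumPy, not MrMustard's internals; apply_loss_fock is a hypothetical helper):

import numpy as np
from scipy.special import comb

def apply_loss_fock(rho, eta):
    """Apply the pure-loss channel with transmissivity eta to a Fock-basis
    density matrix rho, using the Kraus operators
    E_k = sum_n sqrt(C(n, k) * (1 - eta)**k * eta**(n - k)) |n-k><n|."""
    dim = rho.shape[0]
    out = np.zeros_like(rho, dtype=complex)
    for k in range(dim):
        E = np.zeros((dim, dim))
        for n in range(k, dim):
            E[n - k, n] = np.sqrt(comb(n, k) * (1 - eta) ** k * eta ** (n - k))
        out += E @ rho @ E.conj().T
    return out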

Implementation

No response

How important would you say this feature is?

2: Somewhat important. Needed this quarter.

Additional information

No response
