
uncertainties's Introduction

uncertainties


The uncertainties package allows calculations with values that have uncertainties, such as (2 +/- 0.1)*2 = 4 +/- 0.2. uncertainties takes the pain and complexity out of error propagation and calculations with uncertain values. For more information, see https://uncertainties.readthedocs.io/

Basic examples

>>> from uncertainties import ufloat
>>> x = ufloat(2, 0.25)
>>> x
2.0+/-0.25

>>> square = x**2
>>> square
4.0+/-1.0
>>> square.nominal_value
4.0
>>> square.std_dev  # Standard deviation
1.0

>>> square - x*x
0.0  # Exactly 0: correlations taken into account

>>> from uncertainties.umath import sin, cos  # and many more.
>>> sin(1+x**2)
-0.95892427466313845+/-0.2836621854632263

>>> print((2*x+1000).derivatives[x])  # Automatic calculation of derivatives
2.0

>>> from uncertainties import unumpy  # Array manipulation
>>> varr = unumpy.uarray([1, 2], [0.1, 0.2])
>>> print(varr)
[1.0+/-0.1 2.0+/-0.2]
>>> print(varr.mean())
1.50+/-0.11
>>> print(unumpy.cos(varr))
[0.540302305868+/-0.0841470984808 -0.416146836547+/-0.181859485365]

Main features

  • Transparent calculations with uncertainties: Little or no modification of existing code is needed to convert calculations on floats to calculations on values with uncertainties.
  • Correlations between expressions are correctly taken into account. Thus, x-x is exactly zero.
  • Most mathematical operations are supported, including most functions from the standard math module (sin,...). Comparison operators (>, ==, etc.) are supported too.
  • Many fast operations on arrays and matrices of numbers with uncertainties are supported.
  • Extensive support for printing numbers with uncertainties (including LaTeX support and pretty-printing).
  • Most uncertainty calculations are performed analytically.
  • This module also gives access to the derivatives of any mathematical expression (they are used by error propagation theory, and are thus automatically calculated by this module).

Installation or upgrade

To install uncertainties, use:

pip install uncertainties

To upgrade from an older version, use:

pip install --upgrade uncertainties

Further details are in the on-line documentation.

Git branches

The GitHub master branch is the latest development version and is intended to be a stable pre-release version: it is experimental, but should pass all tests. Tagged releases will be available on GitHub and correspond to the releases to PyPI. The GitHub gh-pages branch contains a stable test version of the documentation that can be viewed at https://lmfit.github.io/uncertainties/. Other GitHub branches should be treated as unstable, in-progress development branches.

License

This package and its documentation are released under the Revised BSD License.

History

This package was created around 2009 by Eric O. LEBIGOT.

Ownership of the package was taken over by the lmfit GitHub organization in 2024.

uncertainties's People

Contributors

adityasavara, andrewgsavage, baldyeagle, benabel, cdeil, chrisburr, clade, eendebakpt, ep12, jagerber48, lebigot, mindw, mjpieters, newville, op3, paulromano, rth, willynilly, wshanks


uncertainties's Issues

additional uarray functionality

I often have several ufloat variables that I'd like to insert into a numpy array and then manipulate with the unumpy functionality. For example:

from numpy import array
from uncertainties import ufloat
a = ufloat((1., .1))
b = ufloat((2., .1))
c = array([a, b], dtype=object)

It would be nice not to have to type ", dtype=object" every time. The equivalent functionality with plain floats and numpy arrays doesn't require declaring the dtype. Alternatively, making the array c with unumpy.uarray is clumsy because you have to declare the nominal values and the uncertainties separately. It would be nice to be able to type this (a possible helper is sketched after the example):

from uncertainties import unumpy
from uncertainties import ufloat
a = ufloat((1., .1))
b = ufloat((2., .1))
c = unumpy.uarray([a, b])
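
A small wrapper along the requested lines (a hedged sketch, not part of the package; uarray_from is a hypothetical name) is just a thin layer over NumPy's object dtype:

import numpy as np
from uncertainties import ufloat

def uarray_from(ufloats):
    # Hypothetical helper: pack existing ufloats into an object-dtype array
    # so that the unumpy functions can operate on them.
    return np.array(ufloats, dtype=object)

a = ufloat(1., .1)  # modern two-argument signature
b = ufloat(2., .1)
c = uarray_from([a, b])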

Error in only one direction

I should preface this by saying that this is not really an issue report but rather a feature request.

I have a value for a conversion in a chemical reaction. The conversion is measured to be 100%, which is the maximum theoretically possible. If I have 100% conversion, I have used up all my starting material in the reaction and obviously can't use any more, as it is all gone; thus I can only feasibly have an error in the negative direction, i.e. 1 (-0.05) instead of 1 (+-0.05). How would I represent that in uncertainties? I fear I haven't found a method yet (or missed it), though I have come across this article saying it might not be possible (https://newton.cx/~peter/2013/04/propagating-uncertainties-the-lazy-and-absurd-way/) and this GitHub conversation on the matter (#25).

wrapped function returns NotImplemented

import numpy as np
import uncertainties as un
from uncertainties import unumpy

print "Numpy version:", np.__version__
print "Uncertainties version:", un.__version__

def rotate_inertia_tensor(I, angle):
    '''Returns inertia tensor rotated through angle about the Y axis.

    Parameters
    ----------
    I : ndarray, shape(3,3)
        An inertia tensor.
    angle : float
        Angle in radians about the positive Y axis of which to rotate the
        inertia tensor.

    '''
    ca = np.cos(angle)
    sa = np.sin(angle)
    C = np.matrix([[ca, 0., -sa],
                   [0., 1., 0.],
                   [sa, 0., ca]])
    return C * I * C.T

w_rot = un.wrap(rotate_inertia_tensor)

vals = np.ones((3,3))
stds = 0.1 * np.ones((3,3))
I = unumpy.uarray((vals, stds))
angle = un.ufloat((5.,.1))

print w_rot(I, 5.)

print w_rot(I, angle)

This is what is returned for me:

Numpy version: 1.5.1
Uncertainties version: 1.7.3
[[1.54402111089+/-0.1 1.24258646013+/-0.1 -0.839071529076+/-0.1]
 [1.24258646013+/-0.1 1.0+/-0.1 -0.6752620892+/-0.1]
 [-0.839071529076+/-0.1 -0.6752620892+/-0.1 0.455978889111+/-0.1]]
NotImplemented

I'm not sure what is causing the NotImplemented to be returned. I have a bunch of functions that are written for numpy arrays and would love to use them with uarrays without having to rewrite them with umath calls and stuff.

Remove the need for uncertainties.unumpy for NumPy arrays

If I monkeypatch Variable, I can use my existing code without modifications:

>>> import numpy as np
>>> from uncertainties import Variable, ufloat, umath, unumpy
>>> Variable.sin = umath.sin
>>> a = ufloat(1.23, 0.0345)
>>> a
1.23+/-0.0345
>>> b = np.array([a, a, a])
>>> b
array([1.23+/-0.0345, 1.23+/-0.0345, 1.23+/-0.0345], dtype=object)
>>> np.sin(b)
array([0.9424888019316975+/-0.01153120158579534,
       0.9424888019316975+/-0.01153120158579534,
       0.9424888019316975+/-0.01153120158579534], dtype=object)

although unfortunately it doesn't save any time over using the looped methods already in unumpy.

Dealing with correlations

Dear lebigot,

I am facing the problem of properly correlating the error sources in different observables (variables). I know that the package is supposed to deal with this properly, but only when the error variable is actually the same.
I am in the unfortunate situation in which this is not the case: the same error source has a different impact on different observables. I was hoping that the tag attribute might do the trick, but this example shows it's not the case:

>>> sys = ufloat(1, 0.02, 'sys')
>>> x = 5*sys # a 2% effect on the x observable 
>>> x
5.0+/-0.10000000000000001
>>> y = 10*sys # same error variable, same 2%
>>> y/x #this is handled properly
2.0+/-0
>>> x+y #this too
15.0+/-0.29999999999999999
>>> sys2 = ufloat(1, 0.04, 'sys') 
>>> z = 10*sys2 # but now the error source has a larger impact on z
>>> y+z # and unfortunately this is not handled properly (should be 20+/-0.6)
20.0+/-0.44721359549995798

So, how do I fully correlate (positively and negatively) two variables? Of course the problem is a lot trickier than this mock-up example, and has some error sources that correlate between variables and others that don't.

Let me know if I did not explain myself properly and of course if you have a solution :).

Thank you
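
One way to express an error source that is fully shared but scaled differently per observable (a sketch, not an answer from the original thread): define the source once as a unit-sigma variable and scale its effect explicitly.

from uncertainties import ufloat

u = ufloat(0, 1, 'sys')    # the common error source, with unit sigma
y = 10 * (1 + 0.02 * u)    # 2% systematic effect on y
z = 10 * (1 + 0.04 * u)    # 4% effect of the same source on z
print(y + z)               # 20.0+/-0.6: fully correlated, as desired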

New formatting option with implicit uncertainty?

From a user:

May I suggest you add an additional formatting specifier to "uncertainties"?

In many cases, particularly in tabular presentation of data, one wants to simply round according to the uncertainty and let the "significant figures" communicate the implied precision. It is not, of course, as elegant or as explicit as the shorthand notation, but sometimes it is desirable.

Then one will have the option of presenting the number (3.1415926, 0.03) as

3.14

One possibility would be to use the precision modifier "u" as you already do, but have it affect the number of digits past the one which is constrained by the error, i.e. .1u would print exactly the same data value as the existing default format shows before the +/-, etc. This preserves consistency with the digits displayed across the various formats. For tunability, users can modify the std_dev. In fact, another utility function might be useful where you can modify this on the fly, i.e. x.std_mult(2.0) would return a ufloat with the existing std_dev multiplied by 2, so one could pass such a "temporarily" modified value into the formatter, e.g.:

'{:.2u}'.format(x.std_mult(2.0))
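
Pending such a format option, a user-side helper can already produce the bare, implicitly rounded value (a sketch; implicit() is a hypothetical name, and the digit count is derived from the leading digit of the standard deviation):

import math
from uncertainties import ufloat

def implicit(x, extra_digits=0):
    # Round the nominal value at the decimal place implied by the uncertainty.
    ndigits = -int(math.floor(math.log10(x.std_dev))) + extra_digits
    return round(x.nominal_value, ndigits)

print(implicit(ufloat(3.1415926, 0.03)))  # 3.14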

missing multiplication operator between float and unumpy.matrix in one direction

One direction of the multiplication operator between a float and a unumpy matrix seems to be missing:

>>> from uncertainties import unumpy, ufloat
>>> m = unumpy.matrix([[ufloat((1.0, 0.2))]])
>>> m * 0.5
matrix([[0.5+/-0.1]], dtype=object)
>>> 0.5 * m
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for *: 'float' and 'matrix'

However, this works just fine for a normal numpy matrix:

>>> import numpy
>>> m = numpy.matrix([[1.0]])
>>> m * 0.5
matrix([[ 0.5]])
>>> 0.5 * m
matrix([[ 0.5]])

intersphinx mapping to uncertainties doesn't work

I have a Python package, gammapy, that uses the intersphinx_mapping sphinx extension to create cross-links, e.g. to the numpy, scipy, astropy, ... docs.

For some reason the links to uncertainties don't work, even though I've added http://pythonhosted.org/uncertainties/ to the intersphinx_mapping dict and the file http://pythonhosted.org/uncertainties/objects.inv does exist and is successfully downloaded by sphinx. I think the problem is that the file doesn't contain the usual information, because it is too small (532 bytes).

@lebigot In case you have no idea what I'm talking about ... here is an example where linking against the scipy docs worked and here is an example where linking against the uncertainties docs doesn't work.

Any idea how to fix this?

(I'm trying to fix this on my end at gammapy/gammapy#83 )

unumpy.isnan() should be usable for NumPy's boolean array indexing

NumPy boolean array indexing does not work when the indexes are obtained from unumpy.isnan():

>>> import numpy as np
>>> from uncertainties import ufloat, unumpy
>>> num_elmt = 5000
>>> x = np.array([ufloat(10,1) for _ in range(num_elmt)])
>>> nan_locs = [int(np.random.uniform(0, num_elmt)) for _ in range(100)]
>>> x[nan_locs] = float('nan')
>>> len(x[~unumpy.isnan(x)])
Traceback (most recent call last):
  File "<ipython-input-8-05bf2007e616>", line 1, in <module>
    len(x[~unumpy.isnan(x)])
IndexError: arrays used as indices must be of integer (or boolean) type

While unumpy.isnan() returns booleans:

>>> set(type(v) for v in unumpy.isnan(x))
set([<type 'bool'>])

the data type of unumpy.isnan(x) is object:

>>> unumpy.isnan(x).dtype
dtype('O')

This is probably the source of the problem.
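
A workaround until unumpy.isnan() returns a boolean-dtype array (a sketch): cast the object array explicitly before indexing.

>>> mask = unumpy.isnan(x).astype(bool)  # dtype('bool'), valid for indexing
>>> filtered = x[~mask]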

Gracefully handle negative std_dev values

With uncertainties 2.3.6 (the newest I have available easily), when I write

rngsh_unc   = uncertainties.ufloat(rngsh, -0.03)

I get

fixedpointcalc.py:79: UserWarning: Obsolete: either use ufloat(nominal_value, std_dev), ufloat(nominal_value, std_dev, tag), or the ufloat_fromstr() function, for string representations. Code can be automatically updated with python -m uncertainties.1to2 -w ProgramDirectory.
[...]
  File "[...]/python/2.6/lib/uncertainties-2.3.6-py2.6.egg/uncertainties/__init__.py", line 1949, in _str_to_number_with_uncert
    (value, uncert) = representation.split('+/-')
AttributeError: 'float' object has no attribute 'split'

You may ask why anybody would use negative values for std_dev here. Well, it's because my original code was

rngsh_unc   = uncertainties.ufloat(rngsh,   3.0/100.0*rngsh)

i.e., I want to allow a 3% deviation, and rngsh can be a negative value.
I can of course use abs(rngsh), but IMO it's not that nice.
What do you think?

Initialising an empty array

I have found that I quite often want to initialise an empty array. If I simply initialise an empty array or an array of zeros, an exception is thrown once I try to assign ufloat values to it. Would it be worthwhile to include a little routine to do this automatically? This is what I am currently using (probably not the most efficient version, but it works):

import numpy as np
import uncertainties as unct

def unctarray(a, b):
    i = 0
    temp = np.array([], dtype=object)
    while i < a*b:
        temp = np.append(temp, np.array([unct.ufloat(0, 0)], dtype=object))
        i += 1
    return np.reshape(temp, (a, b))
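
An equivalent construction without the append loop (a sketch; note that np.full() must be avoided here, since it would put the same ufloat object, and therefore fully correlated values, in every cell):

import numpy as np
from uncertainties import ufloat

def unctarray(a, b):
    # One independent ufloat(0, 0) per cell, in an object-dtype array.
    return np.array([[ufloat(0, 0) for _ in range(b)] for _ in range(a)],
                    dtype=object)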

Support for mean in pandas DataFrame of ufloat

Background:
I would like to use the uncertainties package to hold and manipulate the results of Monte Carlo simulations.
At a high level I find a pandas DataFrame to be the most convenient structure, therefore I tried to combine the two packages as in the simple example below.

Is there a way I can already apply mean to different slices of the DataFrame ?
If not, would you please consider adding support for it ?

import pandas as pd
import numpy as np

import uncertainties
from uncertainties import unumpy

value = pd.DataFrame(np.arange(12).reshape(3, 4), index=['r1', 'r2', 'r3'], columns=['c1', 'c2', 'c3', 'c4'])
err = pd.DataFrame(0.01 * np.arange(12).reshape(3, 4) + 0.005, index=['r1', 'r2', 'r3'], columns=['c1', 'c2', 'c3', 'c4'])

df = pd.DataFrame(unumpy.uarray(value.values, err.values), 
                                index=['r1', 'r2', 'r3'], 
                                columns=['c1', 'c2', 'c3', 'c4'])

df.sum(axis=0)  # This works
df.sum(axis=1)  # This works
df.loc[['r1', 'r3'], :].sum(axis=1)  # This works
df.iloc[0:2, 1:].sum(axis=1)  # This works

df.mean(axis=0)  # This does not work: it can be fudged with: df.apply(lambda x: x.sum() / x.shape[0])
df.mean(axis=1)  # This does not work: it can be fudged with: df.T.apply(lambda x: x.sum() / x.shape[0])
df.loc[['r1', 'r3'], :].mean(axis=1)  # This does not work
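
Until DataFrame.mean() handles object columns, the fudge above can be wrapped once (a sketch; umean is a hypothetical helper built only on the sum() calls that already work):

def umean(frame, axis=0):
    # Mean via sum/count, since DataFrame.sum() works on ufloat columns.
    return frame.sum(axis=axis) / frame.shape[axis]

umean(df, axis=0)
umean(df.loc[['r1', 'r3'], :], axis=1)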

Inconsistency between `min` and `<`

I have a dictionary, results, containing the following values:

{'DC_NO': 115439.5272+/-227.5783569,
 'DC_NO_uniform_prior': 128170.5718+/-254.9207861,
 'NDC_NO': 121293.7951+/-67.26122141,
 'Nested_DC_NO': 115361.3262+/-400.9781353,
 'Nested_DC_NO_uniform_prior': 128369.6196+/-804.0209628,
 'Nested_DC_O': 119185.6415+/-0,
 'Nested_NDC_NO': 120300.3604+/-259.6579933,
 'Nested_NDC_O': 133606.5811+/-1661.534633}

If I run min(results) I obtain the answer 'DC_NO'. The nominal value of 'DC_NO' is of course greater than that of 'Nested_DC_NO', so I tested results['DC_NO'] < results['Nested_DC_NO'], which returns False, contradicting the previous result. One might of course argue that the error on 'DC_NO' is smaller, and thus that it might be more likely to be smaller than 'Nested_DC_NO', so that min() gives me the correct answer; but the fact that < disagrees with this assessment surprised me. To me it appears that there is an inconsistency in the implementation of min() and <. What is this caused by? Or is this me making a mistake?

Weird behaviour doing repeated calculations

Hi!
I found something very weird (a bug? user error?) when doing calculations for a number of ufloats in a list. Running this script

from uncertainties import ufloat
import uncertainties.umath as um

k = ufloat(0.245548, 0.003834, 'spring constant')
r = ufloat(0.29, 0.001, 'distance')
alpha = ufloat(0,um.radians(1.0), 'angle')

deflection_list = [ufloat(0.0, 1e-6, 'deflection distance'),
                   ufloat(0.855e-3, 1e-6, 'deflection distance')]

for x in deflection_list:
    theta = x/r
    F = k * theta / (r * um.cos(alpha))
    print('Deflection x={:.2f} um\nF={:.2f} uN'.format(1e6*x, 1e6*F))
    print('Error components:')
    for k, v in F.error_components().items():
        print(k.tag, v)
    print('\n')

repeatedly gives different results for F for the second item in deflection_list. The first item (deflection 0) always stays constant. The problem disappears as soon as there is only one entry in deflection_list.

One run prints

Deflection x=0.00+/-1.00 um
F=0.00+/-2.92 uN
Error components:
spring constant 0.0
deflection distance 2.919714625445898e-06
distance 0.0
angle 0.0


Deflection x=855.00+/-1.00 um
F=0.00+/-177.44 uN
Error components:
angle 0.00017743834844888845
distance 0.0
deflection distance 0.0

(which is incorrect: the second F should not have an expected value of 0).
Running the script again gives

Deflection x=0.00+/-1.00 um
F=0.00+/-2.92 uN
Error components:
spring constant 0.0
angle 0.0
deflection distance 2.919714625445898e-06
distance 0.0


Deflection x=855.00+/-1.00 um
F=2948.28+/-10.74 uN
Error components:
distance 1.0166468489892985e-05
angle 0.0
deflection distance 3.4482758620689654e-06

(now we get a value for the second F, but why is the script behaviour not deterministic?!?)

Additionally, on some runs the error contribution for the deflection distance occurs twice for an unknown reason, even though the computation didn't change:

Deflection x=0.00+/-1.00 um
F=0.00+/-2.92 uN
Error components:
spring constant 0.0
angle 0.0
distance 0.0
deflection distance 2.919714625445898e-06


Deflection x=855.00+/-1.00 um
F=0.00+/-0.01 uN
Error components:
deflection distance 1.0166468489892985e-08
deflection distance 0.0
distance 0.0
angle 0.0

Any idea what's going wrong here?
This is on Python 3.5.1, Anaconda 2.4.0 (64-bit), uncertainties 2.4.7.1 installed with pip.

Continuous Integration

Travis-CI was disabled in August for this repo. Should we add it back? The Python versions tested in .travis.yml might also need to be updated.

Also, would it be useful to add CI tests on Windows with AppVeyor? In that case I can add the setup file.

Make wrapped functions accept arrays

Hi,

I successfully wrapped a scalar function with four parameters of which the latter two have uncertainties:

import numpy as np
import uncertainties

def tau_angstrom(wvl, wvl0, tau0, alpha):
    return tau0 * (wvl / wvl0)**(-alpha)

def tau_angstrom_dev1_tau0(wvl, wvl0, tau0, alpha):
    return tau_angstrom(wvl, wvl0, tau0, alpha) / tau0

def tau_angstrom_dev1_alpha(wvl, wvl0, tau0, alpha):
    return -tau_angstrom(wvl, wvl0, tau0, alpha) * np.log(wvl / wvl0)

tau_angstrom_f = uncertainties.wrap(tau_angstrom, [None, None,
        tau_angstrom_dev1_tau0, tau_angstrom_dev1_alpha])

For ease of use, I would also like to accept arrays for the parameter wvl; how can this be achieved? unumpy.core.wrap_array_func results in an error.
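
One workaround (a sketch, not a documented feature): vectorize the wrapped scalar function with NumPy, keeping the ufloat results as objects.

import numpy as np

# Hypothetical array-capable version of the wrapped function above;
# each element of wvl is fed through the scalar tau_angstrom_f.
tau_angstrom_arr = np.vectorize(tau_angstrom_f, otypes=[object])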

Provide upper and lower bounds as array

We currently have:

In [1]: from uncertainties import ufloat
In [2]: x = ufloat(10, 0.5)
In [3]: x.nominal_value
Out[3]: 10.0
In [4]: x.std_dev
Out[4]: 0.5

Would it be a good idea to have something along the lines of:

In [5]: x.bounds
Out[5]: (9.5, 10.5)
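
In the meantime, a one-line helper does the same job (a sketch; bounds() is hypothetical, with k sigmas as an optional width):

def bounds(x, k=1.0):
    # (nominal - k*sigma, nominal + k*sigma) for a number with uncertainty.
    return (x.nominal_value - k * x.std_dev, x.nominal_value + k * x.std_dev)

bounds(x)  # (9.5, 10.5)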

Module wrapping

I am cross-posting this here from hgrecco/pint#24.

It would be nice to transform your umath module into a function plus a function call:

import sys
import math

def wrap_module(module, wrapped, wrapfun):
    # Here goes most of the code inside umath,
    # but replacing:
    # math -> wrapped
    # wraps -> wrapfun
    ...

wrap_module(sys.modules[__name__], math, wraps)

In this way, other people using your library for their own classes (like me!) could just call wrap_module with a different wrapping function.

What do you think?
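
A minimal concrete version of the idea (a sketch, assuming every public callable of the wrapped module should be passed through the wrapping function; uncertainties.wrap would be the natural wrapfun here):

def wrap_module(target, wrapped, wrapfun):
    # Copy each public callable of `wrapped` into `target`,
    # passed through `wrapfun`.
    for name in dir(wrapped):
        obj = getattr(wrapped, name)
        if callable(obj) and not name.startswith('_'):
            setattr(target, name, wrapfun(obj))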

Plans for native Python 3 support and dropping support for older Python versions

Currently uncertainties supports Python 2.4-2.7, with support for Python 3.2-3.3 through the 2to3 converter. There is also a Python 2.3 branch that appears to still be periodically synced.

The support of such outdated Python versions (e.g. Python 2.3 was installed in Ubuntu Hardy LTS, released in 2008, which reached end of life in 2013) is fairly unconventional in the scientific Python community, where currently most packages support 2.6-2.7 and natively 3.2-3.5 (i.e. without using 2to3); cf. for instance the last numpy release, which is also an optional dependency. Also see the Python 3 statement on dropping support for PY2 in major scientific Python packages by 2020.

The ongoing support for the outdated Python versions 2.4-2.5, 2.3 and, to a lesser extent, 2.6, together with the lack of native 3.2+ support, prevents the use of the latest features and optimizations of the language (also related to issue #52) and, in my opinion, might also dissuade new contributors who have already switched to PY3.

Are there any plans to add native Python 3.2-3.5 support and drop support for older Python versions in future releases? I would be happy to work on this.

Cumulative sums are quadratic in time: is it possible to make them linear in time?

Sums are calculated in linear time, starting with version 3.0. But cumulative sums are calculated in quadratic time (2 minutes for 10,000 terms on my machine; much faster than even a simple sum of 10,000 terms with version 2.x, but still quite slow). Could cumulative sums also be accelerated?

Note: time measurements must force the calculation of the standard deviation of each result. For instance, map(lambda x: x.std_dev, arr.cumsum()) with arr = unp.uarray(np.full(N, 1), np.full(N, 1)) and, e.g., N = 5000 is correct code for a timing. Doing str(arr.cumsum()) is not, as only a certain number of terms is printed, so not all standard deviations are actually calculated (they are lazily evaluated). A bare np.cumsum() is also not a good expression for timing measurements, for the same reason.
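
For reference, a timing along the lines described (a sketch; the list comprehension forces every lazily evaluated standard deviation):

import timeit
import numpy as np
from uncertainties import unumpy as unp

N = 5000
arr = unp.uarray(np.full(N, 1.0), np.full(N, 1.0))
print(timeit.timeit(lambda: [x.std_dev for x in arr.cumsum()], number=1))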

Dealing with two errors

Dear lebigot,

first of all let me thank you for such an awesome package.
I have a question: I am dealing with numbers that have TWO errors; one is statistical, the other is the so-called "systematic" error, and it is considered good practice to deal with them separately.
Something like:
1.0 +/- 0.1 +/- 0.05
The two errors are considered to be totally uncorrelated.

Do you have any suggestion on how to treat this problem with your package?
I tried looking it up in the documentation but could not find anything useful.

Thank you
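
One representation that keeps the two contributions separable (a sketch, not an official recipe): build the value from two independent, tagged variables and read the contributions back with error_components().

from uncertainties import ufloat

x = 1.0 + ufloat(0, 0.1, 'stat') + ufloat(0, 0.05, 'sys')
print(x)  # combined in quadrature: sqrt(0.1**2 + 0.05**2) ~= 0.112
for var, sigma in x.error_components().items():
    print(var.tag, sigma)  # 'stat' and 'sys' reported separately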

`nominal_value(x)` on array

What is the best way to construct a correlated array of values while having an easy way to extract nominal_value and std_dev? I wanted to get the nominal values of an array, but uncertainties.nominal_value doesn't work on a regular numpy array. I could construct a uarray by looping through all values and use the nominal_value attribute of uarray, but that doesn't sound optimal.

In [379]: x = uncertainties.correlated_values([0,1], np.eye(2)); x
Out[379]: (0.0+/-1.0, 1.0+/-1.0)

In [381]: uncertainties.nominal_value(np.array(x))
Out[381]: array([0.0+/-1.0, 1.0+/-1.0], dtype=object)

In [382]: uncertainties.unumpy.uarray(x)
-c:1: UserWarning: Obsolete: uarray() should now be called with two arguments. Code can be automatically updated with python -m uncertainties.1to2 -w ProgramDirectory.
TypeError: can't convert an affine function (<class 'uncertainties.AffineScalarFunc'>) to float; use x.nominal_value
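
For object arrays, the vectorized accessors in unumpy do exactly this, without any looping:

>>> from uncertainties import unumpy
>>> unumpy.nominal_values(np.array(x))
array([ 0.,  1.])
>>> unumpy.std_devs(np.array(x))
array([ 1.,  1.])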

Summing unusably slow

I just want to make sure this is a limitation of the package, and not my own fault. But it seems that summing large arrays is prohibitively slow. I have several arrays with about 20,000 points, each with errors, but it's impossible to find the mean because it takes too long to calculate. To be more specific, I'm getting the mean to normalize the array

alpha = 1.0 + (alpha - alpha.mean()) / (alpha.max() - alpha.min())

Is there any quicker way this can be done? Thanks.

numpy.nanmean() does not skip nan±… or …±nan

Hello!

First of all, great piece of work! It's saving me a lot of time :)

I'm having issues with numpy.nanmean, which should ignore nan values when calculating the mean.

Here is some test code:

from uncertainties import unumpy
import numpy as np
v = np.arange(16,dtype=np.float64)
e = np.sqrt(v)
v[1:3] = np.nan
print(v)
print(np.isnan(v[1:3]))
un = unumpy.uarray(v,e)
print(un)
print(un.mean())
print(np.nanmean(un))
print(v.mean())
print(np.nanmean(v))

Here is the output:

[  0.  nan  nan   3.   4.   5.   6.   7.   8.   9.  10.  11.  12.  13.  14.
  15.]
[ True  True]
[0.0+/-0 nan+/-1.0 nan+/-1.4142135623730951 3.0+/-1.7320508075688772
 4.0+/-2.0 5.0+/-2.23606797749979 6.0+/-2.449489742783178
 7.0+/-2.6457513110645907 8.0+/-2.8284271247461903 9.0+/-3.0
 10.0+/-3.1622776601683795 11.0+/-3.3166247903554 12.0+/-3.4641016151377544
 13.0+/-3.605551275463989 14.0+/-3.7416573867739413
 15.0+/-3.872983346207417]
nan+/-0.6846531968814576
nan+/-0.6846531968814576
nan
8.35714285714

From the output, you can see that both mean and nanmean return nan+/-error. I'd say that the latter should return the mean ignoring the nan values.

I hope you can help with that!
Thanks

Deprecated support for calling ufloat with one argument is now broken

In [2]: ufloat(1)
/usr/bin/ipython:1: UserWarning: Obsolete: either use ufloat(nominal_value, std_dev), ufloat(nominal_value, std_dev, tag), or the ufloat_fromstr() function, for string representations. Code can be automatically updated with python -m uncertainties.1to2 -w ProgramDirectory.
  #!/usr/bin/python3
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/usr/lib/python3.5/site-packages/uncertainties/core.py in ufloat(nominal_value, std_dev, tag)
   3069         # Standard case:
-> 3070         return Variable(nominal_value, std_dev, tag=tag)
   3071     # Exception types raised by, respectively: tuple or string that

/usr/lib/python3.5/site-packages/uncertainties/core.py in __init__(self, value, std_dev, tag)
   2584 
-> 2585         self.std_dev = std_dev  # Assignment through a Python property
   2586 

/usr/lib/python3.5/site-packages/uncertainties/core.py in std_dev(self, std_dev)
   2604         # should work on most platforms.)
-> 2605         if std_dev < 0 and not isinfinite(std_dev):
   2606             raise NegativeStdDev("The standard deviation cannot be negative")

TypeError: unorderable types: NoneType() < int()

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
<ipython-input-2-8cf5b3abea25> in <module>()
----> 1 ufloat(1)

/usr/lib/python3.5/site-packages/uncertainties/core.py in ufloat(nominal_value, std_dev, tag)
   3084             tag_arg = std_dev  # 2 positional arguments form
   3085 
-> 3086         return _ufloat_obsolete(nominal_value, tag_arg)

/usr/lib/python3.5/site-packages/uncertainties/core.py in _ufloat_obsolete(representation, tag)
   3024         return ufloat(representation[0], representation[1], tag)
   3025     else:
-> 3026         return ufloat_fromstr(representation, tag)
   3027 
   3028 # The arguments are named for the new version, instead of bearing

/usr/lib/python3.5/site-packages/uncertainties/core.py in ufloat_fromstr(representation, tag)
   3008 
   3009     (nominal_value, std_dev) = str_to_number_with_uncert(
-> 3010         representation.strip())
   3011 
   3012     return ufloat(nominal_value, std_dev, tag)

AttributeError: 'int' object has no attribute 'strip'

I don't actually use the one-argument form (... except when making a typo) and given that it seems to have been deprecated a while ago, perhaps ufloat's signature should just be modified to have std_dev as a required argument?

Documentation for unumpy.isnan(), etc. is unhelpful

The documentation for unumpy.isnan() is the documentation for numpy.vectorize(). This is not helpful. It would be better to display the NumPy documentation, prefixed by a note about unumpy.isnan() being a generalization to numbers with uncertainty.

formatting an infinite value fails

In [4]: "{}".format(ufloat(np.inf, 0))
---------------------------------------------------------------------------
OverflowError                             Traceback (most recent call last)
<ipython-input-4-e46e3ab01e12> in <module>()
----> 1 "{}".format(ufloat(np.inf, 0))

/usr/lib/python3.5/site-packages/uncertainties/__init__.py in __format__(self, format_spec)
   2207                 digits_limit = (
   2208                     signif_dgt_to_limit(exp_ref_value, num_signif_digits)
-> 2209                     if non_nan_values
   2210                     else None)
   2211 

/usr/lib/python3.5/site-packages/uncertainties/__init__.py in signif_dgt_to_limit(value, num_signif_d)
   1637     '''
   1638 
-> 1639     fst_digit = first_digit(value)
   1640 
   1641     limit_no_rounding = fst_digit-num_signif_d+1

/usr/lib/python3.5/site-packages/uncertainties/__init__.py in first_digit(value)
   1098     # ValueError, so the value is directly tested:
   1099     if value:
-> 1100         return int(math.floor(math.log10(abs(value))))
   1101     else:
   1102         return 0

OverflowError: cannot convert float infinity to integer

and similarly with ufloat(np.inf, np.inf).
Obviously this uncertainty itself isn't "real", but it is still useful in intermediate calculations.

Inconsistent ways of accessing .nominal_value and .std_dev()

I would have expected that the ufloat nominal_value and std_dev would be called in the same way, but they aren't.

A short example of what I mean:

import uncertainties
test1 = uncertainties.ufloat((3, 1))
print test1
3.0+/-1.0

# I would have expected the following to yield the same behavior:
print test1.nominal_value
3.0
print test1.std_dev
<bound method Variable.std_dev of 3.0+/-1.0>

# And, I would have expected the following to yield the same behavior:    
print test1.std_dev()
1.0
print test1.nominal_value()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-9-719980b52ebb> in <module>()
      3 print test1.std_dev
      4 print test1.std_dev()
----> 5 print test1.nominal_value()

TypeError: 'float' object is not callable

Perhaps there is a specific reason for this implementation, but I wanted to bring it to your attention.

NumPy 1.8 breaks mean() in arrays

I upgraded NumPy to 1.8 recently and it broke my apps that use uncertainties. I haven't switched them to 2.0+ yet, and I understand that you may not be interested in making older versions compatible with newer versions of NumPy, so you can close this if that is the case. But here is the bug nonetheless.

Here is pre 1.8 behavior:

In [1]: import numpy

In [2]: from uncertainties import unumpy

In [3]: a = unumpy.uarray(([1.0, 2.0], [0.1, 0.2]))

In [4]: a
Out[4]: array([1.0+/-0.1, 2.0+/-0.2], dtype=object)

In [5]: a.mean()
Out[5]: 1.5+/-0.1118033988749895

In [6]: numpy.mean(a)
Out[6]: 1.5+/-0.1118033988749895

In [7]: numpy.__version__
Out[7]: '1.7.1'

In [8]: import uncertainties

In [10]: uncertainties.__version__
Out[10]: '1.9.1'

Now with NumPy 1.8. It seems the _mean function has changed and is looking for something that isn't there. I haven't investigated further.

Python 2.7.5+ (default, Sep 19 2013, 13:48:49) 
Type "copyright", "credits" or "license" for more information.

IPython 1.1.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import uncertainties

In [2]: import numpy

In [3]: uncertainties.__version__
Out[3]: '1.9.1'

In [4]: numpy.__version__
Out[4]: '1.8.0'

In [5]: from uncertainties import unumpy

In [6]: a = unumpy.uarray(([1.0, 2.0], [0.1, 0.2]))

In [7]: numpy.mean(a)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-7-a6fa0915df2b> in <module>()
----> 1 numpy.mean(a)

/home/moorepants/envs/uncertainties/lib/python2.7/site-packages/numpy/core/fromnumeric.pyc in mean(a, axis, dtype, out, keepdims)
   2714 
   2715     return _methods._mean(a, axis=axis, dtype=dtype,
-> 2716                             out=out, keepdims=keepdims)
   2717 
   2718 def std(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False):

/home/moorepants/envs/uncertainties/lib/python2.7/site-packages/numpy/core/_methods.pyc in _mean(a, axis, dtype, out, keepdims)
     65                 ret, rcount, out=ret, casting='unsafe', subok=False)
     66     else:
---> 67         ret = ret.dtype.type(ret / rcount)
     68 
     69     return ret

AttributeError: 'AffineScalarFunc' object has no attribute 'dtype'

In [8]: a.mean()
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-8-dc887f7a7bfb> in <module>()
----> 1 a.mean()

/home/moorepants/envs/uncertainties/lib/python2.7/site-packages/numpy/core/_methods.pyc in _mean(a, axis, dtype, out, keepdims)
     65                 ret, rcount, out=ret, casting='unsafe', subok=False)
     66     else:
---> 67         ret = ret.dtype.type(ret / rcount)
     68 
     69     return ret

AttributeError: 'AffineScalarFunc' object has no attribute 'dtype'

Printing test fails for Python 2.6.

One unit test fails for Python 2.6, in test_uncertainties.py:

Incorrect representation '-000000000.0inf(inf)' for format '020S' of -inf+/-inf: '-00000000000inf(inf)' expected.

This works for Python 2.7.

Make the whole code of the Python 2.7 version use even more of Python 2.7's features

The Python 2.7 code is mostly based on the last Python 2.6 and even Python 2.5 versions. Some details of the code could be updated to take advantage of the features that Python 2.7 brought, or to be adapted to Python 2.7 directly (parts of the code handle differences between Python 2.6 and Python 2.7, for instance).

Examples:

  • Dictionary comprehension,
  • Set literals,
  • String formatting with {}.

The code tends to contain comments like Python 2.5 or Python 2.6 or Python 2.7+ in places where the code could be upgraded.

Unexpected file on PyPI

The tar.gz file on PyPI contains doc/_templates/layout.html~; maybe it should be removed.

error when using pandas with nan+/-nan

Hi,

I'm not sure if this is an uncertainties error; it might well be an error in pandas. Displaying a pandas.DataFrame with nan+/-nan as a member fails; try:

 >>> import numpy as np
 >>> import pandas as pd
 >>> import uncertainties
 >>> pd.DataFrame([uncertainties.ufloat(np.nan, np.nan)])
 <repr(<pandas.core.frame.DataFrame at 0x7fc00e688240>) failed: ValueError: cannot convert float 
NaN to integer>

With a bit of searching one can get a better stacktrace using:

 >>> pd.DataFrame([uf(np.nan, np.nan)]).to_string()
 ---------------------------------------------------------------------------
 ValueError                                Traceback (most recent call last)
 <ipython-input-13-a2e4d915ea24> in <module>()
 ----> 1 pd.DataFrame([uf(np.nan, np.nan)]).to_string()

 /usr/lib/python3/dist-packages/pandas/core/frame.py in to_string(self, buf, columns, col_space, colSpace, header, index, na_rep, formatters, float_format, sparsify, index_names, justify, line_width, max_rows, max_cols, show_dimensions)
    1291                                            max_cols=max_cols,
    1292                                            show_dimensions=show_dimensions)
 -> 1293         formatter.to_string()
    1294 
    1295         if buf is None:

 /usr/lib/python3/dist-packages/pandas/core/format.py in to_string(self)
     439             text = info_line
     440         else:
 --> 441             strcols = self._to_str_columns()
     442             if self.line_width is None:
     443                 text = adjoin(1, *strcols)

 /usr/lib/python3/dist-packages/pandas/core/format.py in _to_str_columns(self)
     366                                    *(_strlen(x) for x in cheader))
     367 
 --> 368                 fmt_values = self._format_col(i)
     369 
     370                 fmt_values = _make_fixed_width(fmt_values, self.justify,

 /usr/lib/python3/dist-packages/pandas/core/format.py in _format_col(self, i)
     576             (frame.iloc[:, i]).get_values(),
     577             formatter, float_format=self.float_format, na_rep=self.na_rep,
 --> 578             space=self.col_space
     579         )
     580 

 /usr/lib/python3/dist-packages/pandas/core/format.py in format_array(values, formatter, float_format, na_rep, digits, space, justify)
    1763                         justify=justify)
    1764 
 -> 1765     return fmt_obj.get_result()
    1766 
    1767 

 /usr/lib/python3/dist-packages/pandas/core/format.py in get_result(self)
    1779 
    1780     def get_result(self):
 -> 1781         fmt_values = self._format_strings()
    1782         return _make_fixed_width(fmt_values, self.justify)
    1783 

 /usr/lib/python3/dist-packages/pandas/core/format.py in _format_strings(self)
    1817                 fmt_values.append(float_format(v))
    1818             else:
 -> 1819                 fmt_values.append(' %s' % _format(v))
    1820 
    1821         return fmt_values

 /usr/lib/python3/dist-packages/pandas/core/format.py in _format(x)
    1803             else:
    1804                 # object dtype
 -> 1805                 return '%s' % formatter(x)
    1806 
    1807         vals = self.values

 /usr/lib/python3/dist-packages/pandas/core/format.py in <lambda>(x)
    1792 
    1793         formatter = self.formatter if self.formatter is not None else \
 -> 1794             (lambda x: com.pprint_thing(x, escape_chars=('\t', '\r', '\n')))
    1795 
    1796         def _format(x):

 /usr/lib/python3/dist-packages/pandas/core/common.py in pprint_thing(thing, _nest_lvl, escape_chars, default_escapes, quote_strings)
    2863         result = fmt % as_escaped_unicode(thing)
    2864     else:
 -> 2865         result = as_escaped_unicode(thing)
    2866 
    2867     return compat.text_type(result)  # always unicode

 /usr/lib/python3/dist-packages/pandas/core/common.py in as_escaped_unicode(thing, escape_chars)
    2825 
    2826         try:
 -> 2827             result = compat.text_type(thing)  # we should try this first
    2828         except UnicodeDecodeError:
    2829             # either utf-8 or we replace errors

 /usr/lib/python3/dist-packages/uncertainties/__init__.py in __str__(self)
    1935         # string
    1936         # (http://docs.python.org/2/library/string.html#format-specification-mini-language):
 -> 1937         return self.__format__('')  # Works with Python < 2.6, not format()
    1938 
    1939     def __format__(self, format_spec):

 /usr/lib/python3/dist-packages/uncertainties/__init__.py in __format__(self, format_spec)
    2213                 # example for determining the exponent:
    2214                 digits_limit = signif_d_to_limit(exp_ref_value,
 -> 2215                                                  num_signif_digits)
    2216 
    2217                 # print "EXP_REF_VAL", exp_ref_value

 /usr/lib/python3/dist-packages/uncertainties/__init__.py in signif_d_to_limit(value, num_signif_d)
    1675     '''
    1676 
 -> 1677     fst_digit = first_digit(value)
    1678 
    1679     limit_no_rounding = fst_digit-num_signif_d+1

 /usr/lib/python3/dist-packages/uncertainties/__init__.py in first_digit(value)
    1115     # ValueError, so the value is directly tested:
    1116     if value:
 -> 1117         return int(math.floor(math.log10(abs(value))))
    1118     else:
    1119         return 0

 ValueError: cannot convert float NaN to integer

which seems to boil down to some rounding not working as expected. As I said, I'm not sure what's wrong here; since, for example, np.array([uncertainties.ufloat(np.nan, np.nan)]) displays just fine, it could also be that pandas overzealously tries to round stuff?

Cheers,

Mika

working with units?

I would like to work with units, for example:

>>> import astropy.units as u
>>> from uncertainties import ufloat
>>> d = ufloat(211, 55) * u.pc
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-3-c62b18eb7db0> in <module>()
----> 1 d = ufloat(211, 55) * u.pc

TypeError: unsupported operand type(s) for *: 'Variable' and 'Unit'

Is there another way to use units with the uncertainties package?

ImportError: No module named 'lib2to3.tests' -- while running tests

$ python3 -V
Python 3.4.3+
$ nosetests3 --version
nosetests3 version 1.3.6
$ nosetests3 uncertainties
$ python3 -c 'import lib2to3.tests'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named 'lib2to3.tests'
$ nosetests3 uncertainties
E.........../home/pwaller/.local/src/uncertainties/build/lib/uncertainties/unumpy/test_unumpy.py:318: UserWarning: Obsolete: uarray() should now be called with two arguments. Code can be automatically updated with python -m uncertainties.1to2 -w ProgramDirectory.
  arr_obs = unumpy.uarray.__call__(([1, 2], [1, 4]))  # Obsolete call
[ lots of warnings about obsolete stuff ]
======================================================================
ERROR: Failure: ImportError (No module named 'lib2to3.tests')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/pwaller/.local/src/uncertainties/build/lib/uncertainties/lib1to2/test_1to2.py", line 39, in <module>
    import test.test_lib2to3.support as support
ImportError: No module named 'test.test_lib2to3'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/nose/failure.py", line 39, in runTest
    raise self.exc_val.with_traceback(self.tb)
  File "/usr/lib/python3/dist-packages/nose/loader.py", line 420, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/lib/python3/dist-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/lib/python3/dist-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/usr/lib/python3.4/imp.py", line 235, in load_module
    return load_source(name, filename, file)
  File "/usr/lib/python3.4/imp.py", line 171, in load_source
    module = methods.load()
  File "<frozen importlib._bootstrap>", line 1220, in load
  File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 1129, in _exec
  File "<frozen importlib._bootstrap>", line 1471, in exec_module
  File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
  File "/home/pwaller/.local/src/uncertainties/build/lib/uncertainties/lib1to2/test_1to2.py", line 42, in <module>
    import lib2to3.tests.support as support
ImportError: No module named 'lib2to3.tests'

----------------------------------------------------------------------
Ran 52 tests in 2.964s

FAILED (errors=1)

Many internal functions appear in IPython

Many internal functions that should not be exposed (e.g. because they have limited functionality and are optimized for internal use) are exposed, in the sense that they are not protected with a leading underscore. This makes completion in, say, an IPython shell list too many functions compared to what a user should normally need. Internal functions are meant to stay internal until it appears useful to make them available, in which case they might need to be generalized and would need to be in the HTML documentation.

Scalability of the uncertainty propagation in numpy arrays

Statement of the problem

Currently uncertainties supports numpy arrays by stacking uncertainties.ufloat objects inside a numpy.array(..., dtype="object") array. This is certainly nice, as it allows uncertainty propagation to be used automatically with all existing numpy.array operators. However, this also imposes significant performance limitations, as the logic of error propagation is needlessly repeated for every element of the array.

Here is a quick benchmark of the current implementation (v3.0):

%load_ext memory_profiler

import uncertainties as un
from uncertainties import unumpy as unp
import numpy as np

N = 100000
x = np.random.rand(N)
x_err = np.random.rand(N)

ux = unp.uarray(x, x_err)

print('== Numpy ==')
%timeit x**2
%memit x**2
print('== Unumpy ==')
%timeit ux**2
%memit ux**2

# == Numpy ==
# The slowest run took 5.22 times longer than the fastest. This could mean that an intermediate result is being cached 
#10000 loops, best of 3: 57.7 µs per loop
# peak memory: 98.48 MiB, increment: 0.29 MiB
# == Unumpy ==
#1 loops, best of 3: 816 ms per loop
# peak memory: 132.66 MiB, increment: 29.13 MiB

N = 100
x = np.random.rand(N,N)
x_err = np.random.rand(N,N)

ux = unp.uarray(x, x_err)

print('== Numpy ==')
%timeit x.dot(x)
%memit x.dot(x)
print('== Unumpy == ')
%timeit ux.dot(x)
%memit ux.dot(x)

# == Numpy ==
#10000 loops, best of 3: 88.1 µs per loop
# peak memory: 78.93 MiB, increment: 0.07 MiB
# == Unumpy == 
#1 loops, best of 3: 14.7 s per loop
# peak memory: 543.95 MiB, increment: 435.94 MiB

While some decrease in performance is of course expected for the error propagation, for simple array operations the current performance is as follows:

  • for squaring a 100000-long vector: unumpy is ~14000x slower than numpy and takes 100x more memory
  • for a (100, 100) matrix multiplication: unumpy is ~170000x slower than numpy and takes 6200x more memory, which means e.g. that I run out of memory when trying to do a (1000, 1000) matrix multiplication.

Proposed solution

I believe that making unumpy.uarray return a custom undarray object (and not an np.array(..., dtype='object')), which would store the nominal values and standard deviations in 2 numpy arrays (e.g. undarray.n and undarray.s), and then implementing the logic of error propagation at the array level, would address both the memory usage and the performance issues.

The way masked arrays, which are also constructed from 2 numpy arrays, are implemented as a class inheriting from ndarray could be used as an example.

Possible impact

This also goes along with issue #47: if operators are defined as methods of this undarray object with a numpy-compatible API, existing code with numpy operators (e.g. np.sum(undarray)) might just work out of the box [needs confirmation].

Might affect issue #53.

In order to keep backward compatibility, it could be possible to keep the current behavior with

 arr = numpy.array([ufloat(1, 0.1), ufloat(2, 0.002)])
 arr = unumpy.uarray( np.array([1, 1]), np.array([0.1, 0.002]))  # assumes dtype='object'

then switch to this new backend with

 arr = unumpy.uarray( np.array([1, 1]), np.array([0.1, 0.002]), dtype='uarray')

This would require significant work to make all the operators work, and at first only a subset of operators may be supported, but I believe that the performance improvement would go a long way toward making unumpy usable in medium-scale or production applications.

I would be happy to work on this. What do you think @lebigot ?
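
A bare-bones illustration of the proposed backend (a sketch that assumes independent elements and first-order propagation; real correlation tracking would need the Jacobian bookkeeping discussed above):

import numpy as np

class UndArray:
    # Illustrative only: store nominal values and sigmas as two float arrays.
    def __init__(self, n, s):
        self.n = np.asarray(n, dtype=float)
        self.s = np.asarray(s, dtype=float)

    def __pow__(self, p):
        # First-order propagation: sigma_out = |d(x**p)/dx| * sigma_in,
        # computed as two vectorized array operations instead of N
        # per-element object operations.
        return UndArray(self.n ** p, np.abs(p * self.n ** (p - 1)) * self.s)

ux = UndArray(np.random.rand(100000), np.random.rand(100000))
uy = ux ** 2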

NaN instead of ZeroDivisionError in unumpy

Can I get inf or nan for 1/0 as it works with numpy?

from uncertainties import unumpy
import numpy

print  1./numpy.array([0])
print  1./unumpy.uarray(([0], 0))
[ inf]
ZeroDivisionError: float division by zero

Plotting data with error bars/band?

I am not sure whether this belongs in uncertainties (in order to keep it light and with few external dependencies), but I wanted to visualise uncertain measurement data and had to come up with a snippet like this.

In my experience, having visualization tools handy really helps in development and debugging, which might be an argument in favor of creating a uplot module (with only this module requiring matplotlib). Other error visualizations (tooltips showing the chain, tags, ...) may be less trivial and useful to share.

What do others think?

from matplotlib import pyplot
from uncertainties import unumpy

def plot(x, y, *args, **kwargs):
    nominal_curve = pyplot.plot(x, unumpy.nominal_values(y), *args, **kwargs)
    pyplot.fill_between(x,
                        unumpy.nominal_values(y) - unumpy.std_devs(y),
                        unumpy.nominal_values(y) + unumpy.std_devs(y),
                        facecolor=nominal_curve[0].get_color(),
                        edgecolor='face',
                        alpha=0.1,
                        linewidth=0)
    return nominal_curve

if __name__ == '__main__':
    import numpy
    from uncertainties import ufloat

    x = numpy.linspace(0, 15, 120)
    eps = ufloat(0, 0.4)
    y = unumpy.sin((x + eps) / 6 * numpy.pi)

    plot(x, y, 'r--', label='sine')
    pyplot.legend()
    pyplot.show()

Tagging correlated variables

Eric,

Your uncertainties package is awesome. One use case I have encountered is to use the correlated_values function to create new variables with uncertainty based on the covariance from a separate procedure. However, the variables that are returned do not have a tag attribute, and I cannot set the tag attribute.

(From my limited knowledge of Python internals, it seems that this is because the AffineScalarFunc variables that are returned use __slots__ to limit the attributes that can be set.)

So my request is to have __slots__ include tag as well.

-Sterling

Incorrect application of PDG rounding rules?

In [7]: format(ufloat(724.2, 26.4), "")
Out[7]: '724.2+/-26.4'

I'd expect 724+/-26 if my reading of the PDG rules is correct (and anyways it makes sense to avoid printing an uncertainty with 3 significant digits :-)).
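
For reference, a direct implementation of the PDG three-digit rule (a sketch; boundary cases are rounded rather than truncated):

import math

def pdg_round(nominal, err):
    exp = math.floor(math.log10(abs(err)))
    lead3 = int(round(abs(err) * 10 ** (2 - exp)))  # e.g. 26.4 -> 264
    if lead3 < 355:
        n_sig = 2   # 100-354: keep two significant digits
    elif lead3 < 950:
        n_sig = 1   # 355-949: keep one significant digit
    else:
        n_sig = 2   # 950-999: rounds up to 1000, keep two digits
        exp += 1
    ndigits = -(exp - n_sig + 1)
    return round(nominal, ndigits), round(err, ndigits)

print(pdg_round(724.2, 26.4))  # (724.0, 26.0), i.e. 724+/-26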
