
chaospy's Introduction

Chaospy is a numerical toolbox designed for performing uncertainty quantification through polynomial chaos expansions and advanced Monte Carlo methods implemented in Python. It includes a comprehensive suite of tools for low-discrepancy sampling, quadrature creation, polynomial manipulations, and much more.

The philosophy behind chaospy is not to serve as a single solution for all uncertainty quantification challenges, but rather to provide specific tools that empower users to solve problems themselves. This approach accommodates well-established problems and also serves as a foundry for experimenting with new, emerging ones.

Installation

Installation is straightforward via pip:

pip install chaospy

Alternatively, if you prefer Conda:

conda install -c conda-forge chaospy

After installation, visit the documentation to learn how to use the toolbox.

Development

To install chaospy and its dependencies in developer mode:

pip install -e .[dev]

Testing

To run tests on your local system:

pytest --doctest-modules chaospy/ tests/ README.rst

Documentation

To build the documentation, ensure that pandoc is installed and available on your PATH.

From the docs/ directory, build the documentation locally using:

cd docs/
make html

Run make without arguments to list the other build targets. The HTML documentation is written to docs/.build/html.

chaospy's Issues

Truncated Gamma distribution?

Hi,

I am a new user and this question is probably very trivial, but I'll ask it anyway.

I need to generate a distribution type that is not included in the list of default distributions (a truncated gamma, with an upper threshold in addition to the lower one), from which I should then draw a sample according to the Sobol rule. I know I could simply build the distribution with SciPy and then code the Sobol-rule sampling myself. However, I was wondering whether any ChaosPy functionality tackles this directly.

Many thanks in advance for your help and apologies for my question, should it be too trivial. :)
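
A minimal sketch of one possible route, assuming a recent chaospy where a truncation wrapper (cp.Trunc) clips an arbitrary distribution on both sides; the Gamma parameters and bounds below are purely illustrative, and older releases spell the Sobol rule "S" instead of "sobol":

import chaospy as cp

gamma = cp.Gamma(2.0)                            # shape parameter; scale defaults to 1
trunc_gamma = cp.Trunc(gamma, lower=0.5, upper=4.0)
samples = trunc_gamma.sample(128, rule="sobol")  # Sobol low-discrepancy sample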

PCE of a set of data points

Hi, I am a postdoc researcher at Université du Luxembourg.

I have a vector which contains a set of data points (Monte-Carlo realisations).

I would like to use chaospy to compute the PCE of the data points by choosing the order and the type of polynomials (Hermite, Jacobi, etc.).

Sincerely,
Paul.
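
A minimal sketch of the usual point-collocation route, assuming the input samples that produced the Monte Carlo realisations are still available alongside the recorded outputs (all names below are illustrative):

import chaospy as cp

distribution = cp.Normal(0, 1)                # the input distribution fixes the polynomial family (Hermite here)
samples = distribution.sample(1000)           # the inputs behind the realisations
evaluations = samples**2 + 0.1 * samples      # stand-in for the recorded data points
expansion = cp.orth_ttr(8, distribution)      # order-8 Hermite basis
surrogate = cp.fit_regression(expansion, samples, evaluations)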

Lagrange polynomials

Hi there!

I was trying to use the lagrange_polynomial function which is in chaospy/orthogonal.py. I assume that those are the Lagrange basis polynomials. I have encountered the following issues:

  1. Calling chaospy.orthogonal.lagrange_polynomial([0,1]) (or with any number combination, as long as there is a zero in there) results in the error "raise numpy.linalg.LinAlgError("invertable matrix") LinAlgError: invertable matrix".

  2. Calling chaospy.orthogonal.lagrange_polynomial([1, 2]) results in the polynomials [-q0^2+2.0q0, 0.5q0^2-0.5q0]. However, using the definition of Lagrange polynomials, e.g. https://en.wikipedia.org/wiki/Lagrange_polynomial, one should get [-q0+2.0, q0-1.0].

Therefore, I would like to ask: what exactly am I missing about how chaospy constructs Lagrange polynomials?

Best regards,
Dimitris
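
As a cross-check of the expected result, the Lagrange basis at the nodes [1, 2] can be reproduced with SciPy: the i-th basis polynomial interpolates the i-th unit vector.

import numpy as np
from scipy.interpolate import lagrange

nodes = [1, 2]
basis = [lagrange(nodes, np.eye(len(nodes))[i]) for i in range(len(nodes))]
print(basis[0])   # -1 x + 2, i.e. -q0 + 2.0
print(basis[1])   #  1 x - 1, i.e.  q0 - 1.0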

Bug with joint probability distribution

Hi,

I've recently started using chaospy, and I might have found a bug with the joint probability distributions.
Here is an example.

import numpy as np
import chaospy as cp

dist_1 = cp.Uniform(-7.0, 15.0)
dist_2 = cp.Uniform(-3.0, 2.0)

dist_J = cp.J(dist_1, dist_2)

dist_J * 2.0

Changing the last line to dist_J * np.array([[2.0]]) makes the code crash somewhere else instead.

Best regards,
Riccardo

Joint distributions with dependencies and numpy.poly1d

I am trying to construct a joint distribution in which the second distribution depends on the first. This is also done in the examples created here: https://github.com/jp5000/PCE-4-WindEnergy/blob/master/Example/8)_Example.ipynb. I am running the development branch of my fork.

import numpy as np
import chaospy as cp
from matplotlib import pyplot as plt

# manually calculate what np.poly1d is supposed to do
def calc_poly(poly, x):
    exp = 0.0
    degree = len(poly)
    for i, coeff in enumerate(poly):
        exp += (coeff*np.power(x, degree-i))
    return exp

def p1(x):
    return ( 1. + (1.4)**2./(0.75*x + 3.8)**2.0)**0.5

def p2(x):
    return ( 1. + (1.4)**2./(0.75*x + 3.8)**2.0)

# we have the following dependency
dist1_x_rng = np.arange(0, 30, 1)
mu = p1(dist1_x_rng)
sigma = p2(dist1_x_rng)

# fit a polynomial to it
mu_pfit = np.poly1d(np.polyfit(dist1_x_rng, mu, 6))
sigma_pfit = np.poly1d(np.polyfit(dist1_x_rng, sigma, 6))

# is calc_poly doing what it is supposed to do?
assert np.allclose(calc_poly(mu_pfit, dist1_x_rng), mu_pfit(dist1_x_rng))

#plt.plot(dist1_x_rng, mu)
#plt.plot(dist1_x_rng, calc_poly(mu_pfit, dist1_x_rng))
#assert np.allclose(calc_poly(mu_pfit, dist1_x_rng), mu)

dist_1 = cp.Normal()
# indicate the dependency using the np.poly1d object
dist_2a = cp.Lognormal(mu=mu_pfit(dist_1), sigma=sigma_pfit(dist_1))
# or using calc_poly
dist_2b = cp.Lognormal(mu=calc_poly(mu_pfit, dist_1), sigma=calc_poly(sigma_pfit, dist_1))
# or using the polynomials directly
dist_2c = cp.Lognormal(mu=p1(dist_1), sigma=p2(dist_1))

# join the distribution and sample, but this will fail
dist_Q_a = cp.J(dist_1, dist_2a)
sample_Q_a = dist_Q_a.sample(size=512, rule='H')
# see below for the lengthy traceback

# works
dist_Q_b = cp.J(dist_1, dist_2b)
sample_Q_b = dist_Q_b.sample(size=512, rule='H')

# works
dist_Q_c = cp.J(dist_1, dist_2c)
sample_Q_c = dist_Q_c.sample(size=512, rule='H')

I think this example worked with Chaospy at some point, but I haven't figured out when it broke. I will also add this as a test case in PR #31. I have not yet figured out why it fails with numpy.poly1d, but I would like to create an issue for it in case someone already knows what is going on here.

Saving a PCE regression

Hello,

Thanks for this contribution. It is simply a pleasure to use your package!

I was wondering if there is a way to "store" a fitted regression for a point-collocated PCE.
On a large dataset, the regression can take a few minutes, and I would like to avoid refitting every time I want to use the surrogate model.

Thanks in advance,
Best regards
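
A minimal sketch of one way to do this, assuming the fitted chaospy polynomial (here the placeholder name surrogate, e.g. the result of cp.fit_regression) pickles cleanly in the installed version:

import pickle

with open("surrogate.pkl", "wb") as handle:
    pickle.dump(surrogate, handle)

# ...later, in another session:
with open("surrogate.pkl", "rb") as handle:
    surrogate = pickle.load(handle)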

Error when call cp.generate_quadrature

I get the following error when trying the example in chaospy. Could anyone let me know what the problem is and how to fix it? I am new to Python. Thank you so much.
Mai

import chaospy as cp
import numpy as np
import odespy
a = cp.Uniform(0, 0.1)
I = cp.Uniform(8, 10)
dist = cp.J(a, I)
order = 5
P, norms = cp.orth_ttr(order, dist, retall=True)
nodes, weights = cp.generate_quadrature(order+1, dist, rule="G")
Traceback (most recent call last):
File "", line 1, in
File "/home/trunghieumai/src/chaospy/chaospy/quadrature.py", line 105, in generate_quadrature
x, w = golub_welsch(order, domain, acc)
File "/home/trunghieumai/src/chaospy/chaospy/quadrature.py", line 242, in golub_welsch
for k in range(2*o[d]-3)])
File "/home/trunghieumai/src/chaospy/chaospy/dist/backend.py", line 399, in mom
return out.reshape(shape)
ValueError: total size of new array must be unchanged

numpy 1.11 'module' object has no attribute 'long128'

import numpy as np

print 'version', np.version.version
a = np.longdouble(3)
print a
b = np.longfloat(3)
print b
c = np.longlong(3)
print c
d = np.long128(3)
print d
version 1.11.2
3.0
3.0
3

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-22-77c371d67525> in <module>()
      9 c = np.longlong(3)
     10 print c
---> 11 d = np.long128(3)
     12 print d

AttributeError: 'module' object has no attribute 'long128'

errors in orth_bert

n = cp.Normal(0, 1)
polynomial_order = 3
poly = cp.orth_bert(polynomial_order, n, normed=True)

doesn't work for me.

The first issue was that chaospy/poly/base.py line 200 was not checking for numpy.int types. I fixed this in my dev branch, but now the error occurs when numpy's asarray calls iter. This seemed like some special edge case for a 1D polynomial expansion; however, the following produces the same "not iterable" error. Any ideas on the correct way to fix this?

n = cp.Normal(0, 1)
n = cp.Iid(n,2)
polynomial_order = 3
poly = cp.orth_bert(polynomial_order, n, normed=True)

Bug: left truncation gives a negative pdf

Hi, I've tried to get a truncated distribution, but when I truncate to the left I get a negative pdf. However, the samples look correct. Here is an example that I tested on the development branch.

import chaospy as cp
import matplotlib.pyplot as plt
import numpy as np

dist_1 = cp.Normal(0, 5)
dist_2 = cp.trunk(dist_1, 7)
dist_3 = cp.trunk(-5, dist_1)

sample_1 = dist_1.sample(int(1e3))
sample_2 = dist_2.sample(int(1e3))
sample_3 = dist_3.sample(int(1e3))

x = np.linspace(-20, 20, int(1e5));
y_1 = dist_1.pdf(x)
y_2 = dist_2.pdf(x)
y_3 = dist_3.pdf(x)

fig, ax = plt.subplots()
ax.plot(x, y_1)
ax.plot(x, y_2)
ax.plot(x, y_3)
plt.show()

fig, ax = plt.subplots()
plt.hist(sample_1, density=True, bins=40, ec='black')
plt.hist(sample_2, density=True, bins=40, ec='black')
plt.hist(sample_3, density=True, bins=40, ec='black')
plt.show()

absolute/relative error

Hi,

First of all, thanks for this amazing software, it is being very useful for my work!

My question regards the error (L2 norm) in the approximation of a quantity of interest by a polynomial expansion using the point collocation method. How can I calculate it when the quantity of interest is a scalar? And what about a field? In neither case is an analytic expression available for the QoI.

Is there any specific function in chaospy to do so (or in any known python library)?

Thanks for your help!
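
A sketch of a plain Monte Carlo estimate of the relative L2 error, assuming a fitted surrogate u_hat, the original model model, and a joint input distribution dist (all placeholder names); for a field-valued QoI the same formula applies entry-wise before taking the norm:

import numpy as np

validation = dist.sample(1000)                       # shape (dim, 1000) for a joint distribution
truth = np.array([model(*q) for q in validation.T])  # exact model evaluations
approx = u_hat(*validation)                          # the surrogate evaluates vectorised over samples
rel_l2 = np.linalg.norm(truth - approx) / np.linalg.norm(truth)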

High order gPC estimators derived from non-standard distributions are unstable

Hello again,

I just want to share a piece of code and discuss the problem. I believe this is not actually a bug but rather a property of the method, but I am not completely sure.

Basically, the code below produces very high values for the coefficients of the polynomial. This, I believe, results in NaNs in the estimation of the standard deviation. The problem is a function of the order and the parameters of the original distribution.

from numpy import std
from chaospy import Normal, generate_quadrature, orth_ttr, fit_quadrature, Std


def f(x):
    return x

order = 6
rv = Normal(10, 0.1)
data = f(rv.sample(order+1))

nodes, weights = generate_quadrature(order, rv, rule='G')
P, norms = orth_ttr(order, rv, normed=False, retall=True)
u_hat = fit_quadrature(P, nodes, weights, f(nodes[0]), norms=norms)
print(Std(u_hat, rv))
print(std(data))

Can you make a suggestion on what may be causing the issue and how to verify it?

Can't construct reciprocal of uniform distribution

Hi,

I am trying the following code

viscosity_rv = 1.0 / Uniform(907, 1232)

But unfortunately generate_quadrature crashes with the following error:

/home/rsa/.local/lib/python2.7/site-packages/chaospy/dist/operators.py:699: RuntimeWarning: divide by zero encountered in reciprocal
  y = np.sign(x)*np.abs(x)**(1./num)
/home/rsa/.local/lib/python2.7/site-packages/chaospy/dist/operators.py:699: RuntimeWarning: invalid value encountered in multiply
  y = np.sign(x)*np.abs(x)**(1./num)

Handle categorical variables

Hello,

I would like to know whether it is possible to use chaospy with discrete variables in the model.

Greetings

Distribution normalization and cp.Std magnitude

Hi,

I am trying to use chaospy for a 5d uncertainty analysis with the point collocation method. My problem is that when I don't normalize the distributions to reference values (i.e. don't divide by the norm parameter), the standard deviations returned by cp.Std(U_hat, dist) become very large after fitting the polynomials to the simulation results. The cp.E(U_hat, dist) function works properly, though. Maybe I don't quite understand the procedure, but shouldn't the method be independent of how the distributions are normalized?

I have copied the code here:

norm = 11602.
Ln = cp.Normal(mu=0.565992, sigma=0.0807134)
LTe = cp.Lognormal(mu=-1.46212, sigma=0.229387)
Ti0 = cp.Normal(mu=0.6, sigma=0.1816)
Tg0 = cp.Normal(mu=5801.0/norm, sigma=2106.9232/norm)
Tgb = cp.Uniform(300./norm,1160./norm)
dist = cp.J(Ln,LTe,Ti0,Tg0,Tgb)
order = 2
P = cp.orth_ttr(order, dist)
nodes = dist.sample(11, "M")
#solve for nodes here...
U_hat = cp.fit_regression(P, nodes, solves, rule="T")
mean = cp.E(U_hat, dist)
std = cp.Std(U_hat, dist)

-Payam

How to deal with high-dimensional system?

I tried to construct the PCE using the spectral projection method like this:

distribution = cp.Iid(cp.Normal(0, 1), 39)
nodes, weights = cp.generate_quadrature(order = 1, domain = distribution)

My computer crashed immediately, as there are too many parameters. I want to construct a PCE with 39 uncertain parameters. Can I use the spectral projection method? Is there any other way of using spectral projection that reduces the number of samples?

I have constructed a PCE using the linear regression method with 4000 samples like this (P and Y are my samples and predictions):

distribution = cp.Iid(cp.Uniform(0, 1), 39)
polynomial_expansion = cp.orth_ttr(2, distribution)
foo_approx = cp.fit_regression(polynomial_expansion, P, Y)

and the relative accuracy is 1%, which is acceptable for me.

I want to compare spectral projection and linear regression. Are they comparable for this high-dimensional problem with 39 model inputs?

Thank you very much.
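
One partial remedy worth sketching here: generate_quadrature exposes Smolyak sparse grids via sparse=True (the same flag appears in another issue below), which grow far more slowly with dimension than the full tensor grid. A minimal sketch, with the low order deliberate for 39 dimensions:

import chaospy as cp

distribution = cp.Iid(cp.Normal(0, 1), 39)
nodes, weights = cp.generate_quadrature(1, distribution, rule="G", sparse=True)
print(nodes.shape)   # (39, n_nodes), with vastly fewer nodes than the 2**39 of the tensor grid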

Polynomial function which cuts off coefficients below certain boundary

I'm dealing with superstructure optimisation with surrogate models and use Chaospy for generating the surrogates. Unfortunately, I also receive polynomials with very small coefficients (on the order of 10^-5 to 10^-7), which cause problems in the later optimisation step.

A function similar to chaospy.poly.collection.core.cutoff (http://chaospy.readthedocs.io/en/master/polylist.html), but which removes all terms with very small coefficients, would be nice.

Joining input dist and qoi_dist

Hi,

I want to make contours of the QoI distribution versus the input distribution after obtaining the QoI distribution using PCM. I am using cp.J(dist, qoi_dist) to join the distributions into a multivariate distribution. However, it seems cp.J does not properly create the covariance matrix between the two distributions (I believe qoi_dist should be a dependent distribution with respect to the input dist). Is there an easy way to obtain the joint distribution's covariance matrix using U_hat and P?

I have copied the relevant part of my code here:

dist = cp.Lognormal(-0.0408,0.012)
nodes = dist.sample(nosamples,"M")
P = cp.orth_ttr(order, dist)
U_hat = cp.fit_regression(P, nodes, solves, rule="LS")
qoi_dist = cp.QoI_Dist(U_hat, dist)
joined_prob = cp.J(dist, qoi_dist)
x, y = np.meshgrid(x_range, y_range)
prob_pdf = joined_prob.pdf([x, y])
ax.contour(x, y, prob_pdf)

Thanks,
Payam

Truncated exponents of scipy.stats and chaospy give different values

I've looked at the documentation of both functions and tested them against the following minimal example, and I cannot find what's going wrong.

from chaospy import Truncexpon
from scipy.stats import truncexpon
from pickle import load

with open('params.pickle', 'rb') as f:
    p = load(f)[0]

x = 0.00085
print(truncexpon.pdf(x, b=p[0], loc=p[1], scale=p[2]))
print(Truncexpon(up=p[0], shift=p[1], scale=p[2]).pdf(x))

I believe these should be similar. What I am getting though is:

$python2 test.py
4518.666471
2535.1745292

params.zip

any possibility of using "chaospy.poly.base.Poly" object as python symbolic expression

Hi, Jonathan,

First, thank you for this nice package, it's light and helpful! I am currently exploring it with fun!

My question is about using a "chaospy.poly.base.Poly" object as a Python symbolic expression. Is that possible? We can do the following:

print orthpol[3]
q1^2-2.0q1+0.986666666667

I can convert the result to a string with str(orthpol[3]), which is convenient for further symbolic integration. However, the expression shown above is not a valid Python expression. So is there any way I can freely make use of the orthogonal polynomials generated by chaospy, and how?

Thank you in advance for your time and consideration!

Cheers,
Chen
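
A rough, string-based sketch (and therefore fragile): restore the implicit multiplication and the power operator in chaospy's printed form, then hand the result to sympy. orthpol is the expansion from the question above.

import re
import sympy

text = str(orthpol[3])                        # e.g. "q1^2-2.0q1+0.986666666667"
text = text.replace("^", "**")                # Python power operator
text = re.sub(r"(\d)(q\d)", r"\1*\2", text)   # "2.0q1" -> "2.0*q1", "q0q1" -> "q0*q1"
expr = sympy.sympify(text)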

Test failure when installing chaospy on macOS 10.12.5

Hello! Jonathan,

Thank you first of all for this program; it's very useful and helpful. I previously installed chaospy on a Linux machine successfully, following the given steps. But things start to fail when I try to install it on macOS 10.12.5. Attached is the stdout from running "python setup.py test". Could you provide any guidance?

Thank you,
Chen
chaospy_test_stdout.txt

former expons attribute of Poly class

I was checking out the examples from jp5000/PCE-4-WindEnergy and noticed the usage of the expons attribute of Poly classes. This works for chaospy v1.0 (to be specific, at this commit: b6e236c, or as in this fork: jp5000/chaospy), but not for the latest version. If I understand it correctly, the exponents are now in the keys attribute and are ordered corresponding to Poly.coeffs. In chaospy v1.0 the ordering of Poly.expons and Poly.keys is different. It also seems that the ordering of chaospy v1.0 Poly.expons is different compared to the ordering of chaospy v2.0+ Poly.keys. I don't know if a certain ordering has any real practical usefulness (I am a PCE noob).

And finally my question: was expons removed and replaced by keys for a specific reason, or is this a mistake?

Percentile function

Dr. Jonathf,

First of all, thank you for your time.
Currently I'm trying to reproduce the results of an example from a research paper about uncertainty quantification, with the main goal of comparing results and making sure my code is okay. The paper is called "Efficient Sampling for Non-Intrusive Polynomial Chaos Applications with Multiple Uncertain Input Variables" and the example appears on page 9.

My issue is that all the results (mean, variance, standard deviation) agree between the paper and my code, which is good. But when I compare the 95% confidence interval (CI) computed with chaospy's cp.Perc function, the result does not match the paper. However, I have also implemented the bootstrap method as a second approach to obtaining the CI, and with that approach the CI is very close.

Do you know how the cp.Perc function works, or why I might not get the same results from Perc, the bootstrap, and the paper?

If you want I can send you the code.

Thank you very much for your time, cooperation in advance.
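
For reference, a self-contained sketch of how cp.Perc is typically called on a fitted surrogate; note that Perc estimates percentiles by sampling internally, so results fluctuate between runs unless the seed is fixed (the toy model below is purely illustrative):

import chaospy as cp

cp.seed(1234)
dist = cp.Normal(0, 1)
expansion = cp.orth_ttr(4, dist)
nodes = dist.sample(50)
evals = nodes**3 + nodes                     # stand-in for model output
u_hat = cp.fit_regression(expansion, nodes, evals)
ci_95 = cp.Perc(u_hat, [2.5, 97.5], dist)    # lower and upper bounds of the 95% interval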

Multivariate Kernel Density Estimation

Hello,

I am trying to use Chaospy to perform advanced sampling of a multivariate KDE generated via sm.nonparametric.KDEMultivariate. Unfortunately, I am not able to define the KDE as a custom distribution in such a way that I can then call the sample operation. Could someone help me understand how I might accomplish this, if it can be done at all?

Any help would be deeply appreciated!

import numpy as np
import statsmodels.api as sm
import chaospy as cp

data = np.array([[ 0.64206175,  0.49193947, -1.47253426],
       [-0.94203536, -1.16952506,  0.99483489],
       [-0.53587827, -0.29164457,  0.6460487 ],
       [ 0.43096139, -0.59982553, -0.01857683],
       [ 0.52384554, -0.54059551,  0.11118488],
       [-0.65789427,  0.15139668,  0.09203062],
       [ 0.2663031 , -0.4434035 , -0.16632009],
       [ 0.08475679, -0.0661517 , -0.34676888],
       [-0.79848711, -1.63556875,  0.52249545],
       [-1.18427302, -1.33508375,  1.12194074],
       [ 0.18819597, -0.84600592, -0.03214548]])

kde = sm.nonparametric.KDEMultivariate(data=data, var_type='ccc', bw='cv_ml')

cp_kde = cp.construct(cdf=lambda self, q: kde.cdf(q),
                     bnd=lambda self: [(-1.82392181, 9.81120074),
                                         (-8.63429993, 3.11334478),
                                         ( -4.74632693, 2.08811487)])()

cp_kde.sample(4, 'L')
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-2-9984df9b1b05> in <module>()
     22                                          ( -4.74632693, 2.08811487)])()
     23 
---> 24 cp_kde.sample(4, 'L')

/home/<>/lib/python2.7/site-packages/chaospy/distributions/baseclass.pyc in sample(self, size, rule, antithetic, verbose, **kws)
    287         from . import sampler
    288         out = sampler.generator.generate_samples(
--> 289             order=size_, domain=self, rule=rule, antithetic=antithetic)
    290         try:
    291             out = out.reshape(shape)

/home/<>/lib/python2.7/site-packages/chaospy/distributions/sampler/generator.pyc in generate_samples(order, domain, rule, antithetic)
    137     assert rule in SAMPLERS, "rule not recognised"
    138     sampler = SAMPLERS[rule]
--> 139     x_data = trans(sampler(order=order, dim=dim))
    140 
    141     logger.debug("order: %d, dim: %d -> shape: %s", order, dim, x_data.shape)

/home/<>/lib/python2.7/site-packages/chaospy/distributions/baseclass.pyc in inv(self, q, maxiter, tol, verbose, **kws)
    207         """
    208         from . import rosenblatt
--> 209         return rosenblatt.inv(self, q, maxiter, tol, **kws)
    210 
    211     def pdf(self, x, step=1e-7, verbose=0):

/home/<>/lib/python2.7/site-packages/chaospy/distributions/rosenblatt.pyc in inv(dist, q, maxiter, tol, **kws)
     36 
     37     try:
---> 38         out, graph = dist.graph.run(q, "inv", maxiter=maxiter, tol=tol)
     39 
     40     except NotImplementedError:

/home/<>/lib/python2.7/site-packages/chaospy/distributions/graph/baseclass.pyc in run(self, x, mode, **meta)
    196     def run(self, x, mode, **meta):
    197         """Run through network to perform an operator."""
--> 198         return main.call(self, x, mode, **meta)
    199 
    200     def counting(self, dist, mode):

/home/<>/lib/python2.7/site-packages/chaospy/distributions/graph/main.pyc in call(self, x_data, mode, **meta)
     59         out = self(x_data)
     60     else:
---> 61         out = self(x_data, self.root)
     62 
     63     # if mode == "ttr":

/home/<>/lib/python2.7/site-packages/chaospy/distributions/graph/baseclass.pyc in __call__(self, *args, **kwargs)
    158 
    159     def __call__(self, *args, **kwargs):
--> 160         return self._call(self, *args, **kwargs)
    161 
    162     def __str__(self):

/home/<>/lib/python2.7/site-packages/chaospy/distributions/graph/calling/inv.pyc in inv_call(self, q_data, dist)
     33     else:
     34         out, _, _ = approx.ppf(
---> 35             dist, q_data, self, retall=1, **self.meta)
     36     graph.add_node(dist, key=out)
     37 

/home/<>/lib/python2.7/site-packages/chaospy/distributions/approx.pyc in ppf(dist, q, G, maxiter, tol, retall, verbose)
    249 
    250     if not dist.advance:
--> 251         dist.prm, prm = G.K.build(), dist.prm
    252         out = inv(dist, q, maxiter, tol, retall, verbose)
    253         dist.prm = prm

AttributeError: Graph instance has no attribute 'K'

Name `pcd` is not defined.

The (docstring) examples for cp.orth_svd, cp.orth_pcd, and cp.orth_hybrid don't work for me.

In [1]: import chaospy as cp
In [2]: Z = cp.Normal()
In [3]: print cp.orth_pcd(2, Z)
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-3-078656d1e4da> in <module>()
----> 1 print cp.orth_pcd(2, Z)
/usr/local/lib/python2.7/dist-packages/chaospy/orthogonal.pyc in orth_pcd(order, dist, eps, normed,     **kws)
    343     N = len(basis)
    344 
--> 345     L, P = pcd(C, approx=1, pivot=1, tol=eps)
    346     Li = np.dot(P, np.linalg.inv(L.T))
    347 
NameError: global name 'pcd' is not defined

Are these deprecated? What package did that pcd function use to come from?

Additionally, the example for orth_hybrid refers to orth_chol instead.

New truncation schemes

Hello there,

I came across chaospy recently and it is really helpful. Just wanted to mention that one could easily improve the scalability of the regression approaches towards higher input dimensionality by limiting the number of polynomial basis functions a priori. For example, one could rely on some hyperbolic cross truncation scheme, or limit the number of interactions between the variables. Such features would make the software much more powerful.

Regards
Ben
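
Worth noting in this context: orth_ttr already exposes a cross_truncation argument (it appears in another issue below with a value of 1.5); values below 1 give a hyperbolic-cross-like truncation that prunes high-interaction terms. A minimal sketch with an illustrative distribution:

import chaospy as cp

distribution = cp.Iid(cp.Normal(0, 1), 10)
full_basis = cp.orth_ttr(3, distribution)
hyperbolic_basis = cp.orth_ttr(3, distribution, cross_truncation=0.7)
print(len(full_basis), len(hyperbolic_basis))   # the truncated basis is noticeably smaller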

Using partial derivatives of model response in polynomial expansion

I see that in your doctoral thesis you mention that:

To be able to incorporate partial derivatives of the response, the partial derivative of the polynomial expansion must be available as well. In both Turns and Chaospy, the derivative of a polynomial can be generated easily.

I am using the least-squares regression method to obtain the coefficients of the polynomial chaos expansion; therefore, my objective is to obtain a set of linear algebraic equations of the form A.x = B. The coefficient matrix A will contain both the shape functions and their partial derivatives evaluated at the regression points.

Could you please elaborate a bit on how I can obtain the partial derivatives of the shape functions if I want to use the gradient information of the model response?
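
A minimal sketch, assuming chaospy's polynomial differentiation helper (cp.differential in the releases contemporary with this issue; newer releases expose an equivalent derivative routine under a different name). It returns the partial derivative of every basis polynomial with respect to one variable, which can then be evaluated at the regression points to fill the extra rows of A:

import chaospy as cp

var = cp.variable(2)                                    # the polynomial variables q0, q1
distribution = cp.J(cp.Normal(0, 1), cp.Normal(0, 1))
expansion = cp.orth_ttr(3, distribution)
d_expansion_dq0 = cp.differential(expansion, var[0])    # d(Psi_n)/d(q0) for every basis term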

distributions: add/mult and test coverage

Is it reasonable to expect that dist.backend.Dist + 1 works but 1 + dist.backend.Dist fails? I don't have any experience with overloading operators, but I would assume both left and right addition should be possible? Or am I missing something else here?

>>> import chaospy as cp
>>> cp.Uniform() + 1
<chaospy.dist.operators.Add at 0x7fb40405e400> 
# but this fails
>>> 1 + cp.Uniform()
Traceback (most recent call last):
  File "chaospy/src/chaospy/dist/backend.py", line 372, in __radd__
    return add(X, self)
  File "chaospy/src/chaospy/dist/operators.py", line 134, in add
    return Add(A=A, B=B)
  File "chaospy/src/chaospy/dist/operators.py", line 34, in __init__
    _length=length, _advance=True)
  File "chaospy/src/chaospy/dist/backend.py", line 115, in __init__
    self.dependencies = self.G.run(self.length, "dep")[0]
  File "chaospy/src/chaospy/dist/graph.py", line 266, in run
    out = self(self.root)
  File "chaospy/src/chaospy/dist/graph.py", line 158, in __call__
    return self._call(*args, **kwargs)
  File "chaospy/src/chaospy/dist/graph.py", line 526, in dep_call
    out = dist._dep(self)
  File "chaospy/src/chaospy/dist/backend.py", line 345, in _dep
    if len(self)==1:
TypeError: only integer arrays with one element can be converted to an index

If it would help, I can go and create a bunch of tests for a few things like adding/multiplying distributions, and send it to you as a PR. Do you think creating a tests/test_dist.py file would be the right way to go?

Test local build - 1 Test failed (chaospy.poly.collection.core.cutoff)

Hello Jonathan,
I have installed chaospy and run the tests as recommended. I get 113 passed tests but one failed test. I assume this is not the output I'm supposed to get. Do you have any suggestions for what I could try or where I could find more info?
Thank you very much for your help!
Marion

================================== FAILURES ===================================
________________ [doctest] chaospy.poly.collection.core.cutoff ________________
095 Defaults to 0.
096 high (int) : The upper threshold for the cutoff range.
097
098 Returns:
099 (Poly) : The same as P, except that all terms that have a order not
100 within the bound low<=order<high are removed.
101
102 Examples:
103 >>> poly = cp.prange(4, 1) + cp.prange(4, 2)[::-1]
104 >>> print(poly)
Expected:
[q1^3+1, q1^2+q0, q0^2+q1, q0^3+1]
Got:
[q1^3+1, q0+q1^2, q0^2+q1, q0^3+1]

D:\Python\chaospy\chaospy\src\chaospy\poly\collection\core.py:104: DocTestFailure
==================== 1 failed, 113 passed in 9.14 seconds =====================

License

Hi,

I was creating a conda-build recipe for chaospy and noticed there is no license file. It's stated in setup.py that it is a BSD license or an MIT-compatible license, while most of the libraries included are GPL. I think it would be good to have an explicit license file for chaospy, as well as to make clear which BSD- or MIT-compatible license actually applies.

Best,
Jacob

Smolyak grids diverge for higher polynomial orders

This is just a quick reality check, as I may simply be using the method incorrectly. I would appreciate a comment on this. I am running the following code:

def cp_pseudospectral(max_order=10, sparse=False):
    '''Test of the pseudospectral method'''
    errors = zeros(max_order)

    for order in range(max_order):
        P, norms = orth_ttr(order, dist, retall=True)
        nodes, weights = generate_quadrature(
            2 * order, dist, sparse=sparse, rule='G')
        solves = [u(t, s[0], s[1]) for s in nodes.T]
        u_hat = fit_quadrature(P, nodes, weights, solves, norms=norms)
        errors[order] = L2norm_in_random_space2D(u, u_hat)
    return errors

where u is the exponential function. I am getting the following behavior:

[figure: convergence-ps_grids]

Smolyak can keep up with the tensor grid depending on the quadrature order I select, but it diverges in the end. Is that expected behavior?

Typo in README Installation?

Hi, may I ask if there is a typo in the installation readme?

git clone git@github.com:jonathf/chaospy.git
cd chaospy
pip install -r requirements.txt
python setupy.py install

I wonder if the last line should be "python setup.py install" instead.
Thank you so much for your work :) I really appreciate it!

Is it possible to directly evaluate the polynomial surrogate?

I'm in a situation where I want to be able to directly evaluate the polynomial surrogate. For example, I want to evaluate foo_approx at specified points in the uncertain space. Can this be done with chaospy?

import chaospy as cp
import numpy as np

def foo(coord, param):
   return param[0] * np.sin(coord) + param[1] * np.sin(coord * 30)

coord = np.linspace(0, 10, 200)
distribution = cp.J(
     cp.Normal(1, 0.6),
     cp.Normal(2, 0.2)
 )

polynomial_expansion = cp.orth_ttr(3, distribution)

samples = distribution.sample(50)
evals = [foo(coord, sample) for sample in samples.T]

foo_approx = cp.fit_regression(polynomial_expansion, samples, evals, rule='T')

# Now I want to evaluate the polynomial surrogate at param = [1.5, 1.5]
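
A short sketch (assuming the fit above): chaospy polynomials are callable, so the surrogate can be evaluated directly at a point in the uncertain space.

prediction = foo_approx(1.5, 1.5)   # one surrogate value per entry of coord, i.e. an array of length 200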

Polynomial representations and random variables

Hi,

Many thanks for contributing this package. I've done a few basic examples and I like your API a lot. At the moment I am trying to construct a random variable based on the polynomial representation. Is there a way to do it in ChaosPy already?

For instance, if I have a code which looks like this

a = Uniform(a_low, a_high)
I = Uniform(I_low, I_high)
dist = J(a, I)

P, norms = orth_ttr(order, dist, retall=True)
nodes, weights = generate_quadrature(
       order, dist, sparse=False, rule='G')
solves = [u(t, s[0], s[1]) for s in nodes.T]
u_hat = fit_quadrature(P, nodes, weights, solves, norms=norms)

I would like to make a random variable based on u_hat. If I just use algebraic operations on a and I, I get the expected behaviour of a transformed RV, but how can I do this with the u_hat representation?

Obviously, I can print u_hat and see it in terms of q0 and q1, but I would like to make an actual substitution, see it in terms of a and I, and then, for instance, plot a pdf.

Please let me know if it's there already. Otherwise, I am happy to work on a pull request to implement it.
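
A sketch of one existing route, assuming u_hat and dist from the snippet above and chaospy's QoI_Dist helper (it appears in other issues here): it builds an approximate random variable for the QoI by sampling the surrogate and fitting a kernel density estimate. Depending on the version, it may return a one-element array of distributions, in which case index it with [0] first.

import numpy as np
import chaospy as cp

qoi_dist = cp.QoI_Dist(u_hat, dist)
draws = qoi_dist.sample(10**4)
grid = np.linspace(draws.min(), draws.max(), 200)
density = qoi_dist.pdf(grid)                 # approximate pdf of the QoI random variable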

Extract the Fourier Coefficients

First of all, thank you for your time.

My issue is how to determine the Fourier coefficients C_n from a surrogate model created with the fit_quadrature function, or whether a function exists to do so.

For example:

Surrogate model = 0.13*q0 + 0.1*q1 - 10
Hermite polynomials = [1, q1 - 100, q0 - 25.13]

Y = QoI = C0*Psi_0 + C1*Psi_1 + C2*Psi_2 = C0*(1) + C1*(q1 - 100) + C2*(q0 - 25.13)
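
A sketch of recovering the expansion (Fourier) coefficients manually via pseudo-spectral projection, c_n = sum_k w_k * Psi_n(x_k) * u(x_k) / ||Psi_n||^2, assuming the P, nodes, weights, norms and model evaluations solves from a fit_quadrature workflow like the ones shown in other issues here:

import numpy as np

psi_at_nodes = P(*nodes)                      # shape (n_terms, n_nodes)
coefficients = np.sum(weights * psi_at_nodes * np.asarray(solves), axis=-1) / norms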

Issue obtaining orthogonal polynomials for multivariate function

Hi Jonathan,

I receive the following error message:

MemoryError: Too large sets

when I run the following script:

distribution = cp.J(cp.Normal(-1,1), cp.Weibull(1.1,90,-180), cp.Uniform(0,10), cp.Uniform(0,10))
orths, norms = cp.orth_ttr(2, distribution, normed=True, retall=True, cross_truncation=1.5)

I figure it is the Weibull distribution that's causing the error.
The above script is only a sample; for the problem at hand I have many input variables with Weibull distributions.
It would be very helpful if you could suggest a workaround for the problem.

Thank you.
Kind Regards,
Rahul

Bug with truncation on a dependent distribution

Hi, I would like to truncate a dependent distribution, but the following code doesn't work.

import numpy as np
import chaospy as cp
import matplotlib.pyplot as plt

dist_1 = cp.Uniform(1, 2)
dist_2 = cp.trunk(cp.Normal(5.0,dist_1), 6.0)

dist_J = cp.J(dist_1, dist_2)
sample_J = dist_J.sample(int(1e6))

fig, ax = plt.subplots()
plt.hist(sample_J[0,:], density=True, bins=4, ec='black')
plt.hist(sample_J[1,:], density=True, bins=40, ec='black', alpha=0.7)
plt.show()

Gumbel copula and Gumbel distribution have the same name and collide

Hello Chaospy team. Great work with this library.

Gumbel copula and Gumbel distribution have the same name and collide:

dist_Q = cp.Gumbel(dist, theta=2.)

TypeError: Gumbel() got an unexpected keyword argument 'theta'

Inspecting the signature with cp.Gumbel? shows cp.Gumbel(scale=1, loc=0), i.e. the Gumbel distribution rather than the copula. On the other hand, this works!!

dist_Q = cp.Copula(dist, cp.gumbel(len(dist), theta=2.))

Sobol sensitivity indices

Dear Jonathf,

I am trying to use the sensitivity index functions from ChaosPy in a cardiac mechanics problem. Four parameters were considered uncertain, and a quantity of interest with a low variance was analysed. I was expecting the main sensitivity indices to be close to zero for all parameters, due to the low variance (Var = 5.0549893493e-26). However, the main sensitivity index for the second parameter was 0.96500090321 when I used the function Sens_m.
Then I tried to compute this index manually using the functions Var and E_cond. I found the same value for the index, but I could see that the conditional variance was Cond_Var = 5.28776503722e-28 and the variance was Var = 5.0549893493e-26. So when I computed the index Si = Cond_Var/Var, the result was 0.96500090321.

Shouldn't this sensitivity index be zero? If yes, how can I fix this error?

Seeing your function:

def Sens_m(poly, dist, **kws):
    """
    Variance-based decomposition
    AKA Sobol' indices
    First order sensitivity indices
    """
    dim = len(dist)
    if poly.dim<dim:
        poly = chaospy.poly.setdim(poly, len(dist))

    zero = [0]*dim
    out = np.zeros((dim,) + poly.shape)
    V = Var(poly, dist, **kws)
    for i in range(dim):
        zero[i] = 1
        out[i] = Var(E_cond(poly, zero, dist, **kws), dist, **kws)/(V+(V == 0))*(V != 0)
        zero[i] = 0
    return out

I thought of changing the line that computes out[i] to:
out[i] = Var(E_cond(poly, zero, dist, **kws), dist, **kws)/(V+(V < 1e-16))*(V > 1e-16)

What do you think?

QoI example broken

@flo2k I am working on a refactor of the code, and I am noticing that your code example for QoI_Dist is incomplete:

Examples:
    >>> cp.seed(1000)
    >>> x = cp.variable(1)
    >>> poly = cp.Poly([x])
    >>> qoi_dist = cp.QoI_Dist(poly, dist)
    >>> print(qoi_dist[0].pdf([-0.75, 0., 0.75]))
    [  1.27794383e-123   3.99317083e+000   1.16692607e-100]

What should dist be defined as to complete this example?

folded normal distribution

Hi again,

Could you please give some tips on how to build a folded normal distribution?
I was expecting parameters like mu, sigma, and the location of the fold; however, there are no such arguments in this function.

Thanks!

Theory Guide - SerialLayers function

In the 1.3 Tutorial of the Theory Guide there is a function called SerialLayers for which I cannot find any reference or documentation. Can you add it to the example Python code?

Total effect Sobol Indices in Monte Carlo approach

First of all, thank you for your time.
I have a question about how to do a Sobol sensitivity analysis with a Monte Carlo implementation, because the Sobol function needs a polynomial surrogate model.

I have for example:

  • 3 random parameters with Normal distribution.
  • 100 simulations of our QoI with above input parameters.
  • QoI=[QoI_1,QoI_2,...,QoI_100]
  • How can I compute the Sobol indices of these 3 random inputs with respect to the QoI?

thank you very much!!!
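
A sketch of one common workaround: fit a polynomial surrogate to the recorded Monte Carlo pairs and read the indices off the surrogate (see also the Sens_m source quoted in another issue here). The names inputs (shape 3 x 100) and qoi (length 100) are placeholders for the recorded samples, and Sens_t is assumed to be the total-order counterpart of Sens_m:

import chaospy as cp

distribution = cp.Iid(cp.Normal(0, 1), 3)      # stand-in for the three normal input parameters
expansion = cp.orth_ttr(2, distribution)
surrogate = cp.fit_regression(expansion, inputs, qoi)
first_order = cp.Sens_m(surrogate, distribution)
total_order = cp.Sens_t(surrogate, distribution)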

convenient interface to the cdf of given distribution

I am trying to figure out how to get the cdf from a given distribution. For a pdf the interface is straightforward:

import numpy as np
import chaospy as cp
rayleigh = cp.Rayleigh(scale=11)
ws = np.arange(0, 25, 0.1)
rayleigh.pdf(ws)

However, a similarly convenient interface for the cdf is not available (or is it?). Is this a deliberate choice, or something that remains to be implemented? Is there an obvious alternative I am missing that would give me the cdf for a given range of values? I am a little confused here, because I can clearly see that the cdf for each distribution is defined in the _cdf method in chaospy.dist.cores.
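
A sketch, assuming the forward Rosenblatt transformation dist.fwd, which for a one-dimensional distribution coincides with the CDF (newer chaospy releases also expose dist.cdf directly):

import numpy as np
import chaospy as cp

rayleigh = cp.Rayleigh(scale=11)
ws = np.arange(0, 25, 0.1)
cdf_values = rayleigh.fwd(ws)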
