ptb-m4d / pydynamic
Python library for the analysis of dynamic measurements
Home Page: https://ptb-m4d.github.io/PyDynamic/
License: GNU Lesser General Public License v3.0
After inserting the class misc.testsignals.corr_noise into the public interface of the module we should add proper documentation.
The optional parameter verbose
is not included in the corresponding docstring. Although its meaning is fairly obvious, we should add an explanation for the sake of completeness.
The thresholds for test_sos_freqresp and test_sos_phys2filter seem to be too strict, since the tests fail from time to time. We should adapt the assertion thresholds so that the tests pass reliably.
See here for an example test execution which failed and succeeded on re-execution:
https://circleci.com/gh/PTB-PSt1/PyDynamic/80#artifacts/containers/0
Some digital sensors generate their own sample clock. This clock fluctuates considerably, which leads to non-equidistant sample spacing. FFT, DFT and wavelet transforms require equidistant samples, so it would be nice if PyDynamic could implement interpolation methods with error propagation.
Best regards, Benedikt
In testsignals.py we sometimes add np.random.randn(len(time)) * noise, where noise is a float, and sometimes we add np.random.randn(len(x)) * noise**2. It appears to me that we should harmonize this to avoid confusion.
@mgrub What do you think?
In the source of PyDynamic/uncertainty/propagate_DFT.py, the docstring of the function DFT2AmpPhase() states:
P: np.ndarray
vector of phase values
It would be helpful to know the possible interval, i.e. (-pi, pi), (0, 2pi), (-180, 180) or whatever it is. I confess that I did not check the web documentation for that information.
By the way, "DFT2AmpPhase" is in fact a "DFT2MagPhase". This is the very common confusion of amplitude (oscillation/sine) and magnitude (vector/complex number/transfer function). But I fear it would be a mess to change that now.
PyDynamic installed/updated via "pip" on 18.10.2016:
/usr/local/lib/python3.5/dist-packages/PyDynamic/uncertainty/propagate_DFT.py:594: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
rH, iH = H[:N/2+1], H[N/2+1:]
/usr/local/lib/python3.5/dist-packages/PyDynamic/uncertainty/propagate_DFT.py:595: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
rY, iY = Y[:N/2+1], Y[N/2+1:]
/usr/local/lib/python3.5/dist-packages/PyDynamic/uncertainty/propagate_DFT.py:597: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
Yc = Y[:N/2+1] + 1j*Y[N/2+1:]
/usr/local/lib/python3.5/dist-packages/PyDynamic/uncertainty/propagate_DFT.py:598: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
Hc = H[:N/2+1] + 1j*H[N/2+1:]
/usr/local/lib/python3.5/dist-packages/PyDynamic/uncertainty/propagate_DFT.py:68: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
v1 = V[:N]; v2 = V[N:]
/usr/local/lib/python3.5/dist-packages/PyDynamic/uncertainty/propagate_DFT.py:69: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
w1 = W[:N]; w2 = W[N:]
/usr/local/lib/python3.5/dist-packages/PyDynamic/uncertainty/propagate_DFT.py:83: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
A = M[:N,:N]
/usr/local/lib/python3.5/dist-packages/PyDynamic/uncertainty/propagate_DFT.py:84: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
B = M[:N,N:]
/usr/local/lib/python3.5/dist-packages/PyDynamic/uncertainty/propagate_DFT.py:85: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
D = M[N:,N:]
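These warnings stem from indexing with the result of a true division (N/2), which is a float under Python 3 and newer NumPy versions. A minimal sketch of the fix, with illustrative values only, is to compute the slice boundaries with integer (floor) division:

import numpy as np

# Illustrative values: N is the (even) number of time-domain samples,
# H and Y are hstacked [real, imag] spectra of length N + 2 as returned by GUM_DFT.
N = 6
H = np.arange(N + 2, dtype=float)
Y = np.arange(N + 2, dtype=float)

half = N // 2 + 1  # floor division keeps the slice index an integer

rH, iH = H[:half], H[half:]
rY, iY = Y[:half], Y[half:]

Yc = Y[:half] + 1j * Y[half:]  # recombine real/imag parts into complex values
Hc = H[:half] + 1j * H[half:]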
The code still contains a lot of TODO statements. We should transfer these into this issues section.
In uncertainty.propagate_filter.FIRuncFilter, add handling of covariance propagation according to http://dx.doi.org/10.1088/0026-1394/45/4/013.
Idea: Write the code in a way that assumes sigma_noise to be of type numpy.ndarray; otherwise build an ndarray from the float and proceed (see the sketch below).
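A minimal sketch of that normalization step, using a hypothetical helper (the name _as_noise_array and its interface are illustrative, not part of FIRuncFilter):

import numpy as np

def _as_noise_array(sigma_noise, n_samples):
    # Hypothetical helper: return sigma_noise as a 1-D ndarray of length n_samples.
    # Scalars are broadcast to a constant vector; arrays are passed through
    # after a basic shape check.
    sigma = np.asarray(sigma_noise, dtype=float)
    if sigma.ndim == 0:  # a plain float was given
        sigma = np.full(n_samples, float(sigma))
    elif sigma.shape != (n_samples,):
        raise ValueError("sigma_noise must be a scalar or of length n_samples")
    return sigma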
Issue generated from in-code-TODO, see #41
We should think about adding wt=None in the implementation of LSFIR_uncMC or removing it from the signature. In the docstring it is 'announced' as: wt: np.ndarray of shape (2M,), optional vector of weights.
The optional parameter rate is not included in the corresponding docstring. We should include an explanation for the sake of completeness.
The if __name__ == "__main__": part of signals.py produces various errors on execution. We should find out the reason for those errors, resolve them, and include the remaining code parts in the test suite.
We should update the image and think about a workflow for doing so in the future (for example, when we rename the modules while working on issue #31).
This is a (nasty) side effect of the introduction of the mask parameter for GUM_DFT.
For example:
If the DC (0 Hz) component of a signal was removed by masking it "0" in GUM_DFT, then the argument to a subsequent GUM_iDFT lacks the first real and first imaginary component without knowing about it.
This leads to an erroneous (distorted) output.
As one possible solution, the "0"-masked components could be zero-padded in GUM_iDFT (see the sketch below).
The documentation does not contain the module tools, although it contains important parts of the software. We should include it as soon as possible.
Some routines in the package make use of np.random.multivariate_normal(...), which samples from a multivariate normal distribution based on the covariance matrix given in the arguments to the call.
In many cases of dynamic measurements, long time series or combinations of time series lead to very large but (very) sparse covariance matrices. These cannot be handled efficiently by np.random.multivariate_normal().
A dedicated function able to deal efficiently with scipy.sparse matrices would be much appreciated (see the sketch below)!
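A minimal sketch of such a sampler, assuming the optional scikit-sparse (CHOLMOD) package provides the sparse Cholesky factorization; the function name and interface are a suggestion only:

import numpy as np
from scipy import sparse
from sksparse.cholmod import cholesky  # optional dependency, assumed available here

def sparse_multivariate_normal(mean, cov, size):
    # Draw `size` samples from N(mean, cov) for a sparse, positive-definite covariance.
    factor = cholesky(sparse.csc_matrix(cov))  # L @ L.T == cov[P][:, P]
    L = factor.L()
    P = factor.P()                             # fill-reducing permutation (index vector)
    z = np.random.standard_normal((cov.shape[0], size))
    y = L @ z                                  # samples with the permuted covariance
    samples = np.empty_like(y)
    samples[P, :] = y                          # undo the permutation row-wise
    samples = samples + np.asarray(mean)[:, np.newaxis]
    return samples.T                           # shape (size, len(mean))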
In uncertainty.propagate_MonteCarlo, insert a new function that implements the UMC algorithm published in http://dx.doi.org/10.1088/0026-1394/49/3/401.
Issue generated from in-code-TODO, see #41
Figure out what is needed and what are unnecessary leftovers from previous builds, and delete the latter.
The following function definitions in PyDynamic-1.2.61\PyDynamic\uncertainty\propagate_DFT.py use slightly different orders of the parameters (Y, X, UY, UX vs. Y, UY, X, UX). This needs special attention from the user and might be a pitfall when changing from multiplication to division.
def DFT_transferfunction(X, Y, UX, UY):
def DFT_deconv(H, Y, UH, UY):
def DFT_multiply(Y, UY, F, UF=None):
Maybe introduce an additional function DFT_divide(Y, UY, X, UX) for the calculation of Y/X (see the sketch below).
Additionally, for me, the order of parameters "numerator followed by denominator" is more intuitive. In DFT_deconv and DFT_transferfunction the order is "denominator followed by numerator".
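A minimal sketch of such a convenience wrapper, assuming DFT_deconv keeps its current (denominator, numerator, ...) argument order; the wrapper itself is a suggestion, not an existing PyDynamic function:

from PyDynamic.uncertainty.propagate_DFT import DFT_deconv

def DFT_divide(Y, UY, X, UX):
    # Calculate Y/X with uncertainties, using the more intuitive
    # 'numerator followed by denominator' parameter order.
    # Internally this just reorders the arguments for DFT_deconv,
    # which expects (denominator, numerator, U_denominator, U_numerator).
    return DFT_deconv(X, Y, UX, UY)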
In uncertainty.propagate_filter.FIRuncFilter, add the possibility to change the "color" of the noise/uncertainty.
Remarks:
Issue generated from in-code-TODO, see #41
For long data sets the size of the covariance matrix calculated in GUM_DFT(...) becomes prohibitive.
In many cases, however, the analysis is only interested in a (small) subset of the complete frequency range.
Hence it would be beneficial to have a means to select a subset of the frequencies via a parameter of GUM_DFT, with the covariance matrix only calculated for that subset.
The subset could be selected by
Code analysis showed that the variable Umu in fit_transfer.py is not used.
For consistency reasons it would be helpful to have a function
uncertainty.propagate_DFT.GUM_DFTfreq(N, dt)
which returns the vector of frequencies related to the result of GUM_DFT(...); see the sketch below.
See also numpy.fft.fftfreq(...) or scipy.fftpack.fftfreq(...).
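A minimal sketch of what such a helper could look like, assuming GUM_DFT operates on N real-valued time-domain samples taken with spacing dt, so that the relevant frequencies are the non-negative bins (this is a suggestion, not the existing PyDynamic API):

import numpy as np

def GUM_DFTfreq(N, dt=1.0):
    # Frequencies corresponding to the N//2 + 1 non-negative bins of a
    # real-input DFT of N samples taken with spacing dt.
    return np.fft.rfftfreq(N, d=dt)

# Example: 8 samples at a 100 Hz sampling rate
print(GUM_DFTfreq(8, dt=0.01))  # [ 0.  12.5 25.  37.5 50. ]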
It would improve debugging efficiency in application development if warnings from PyDynamic subroutines started with the name of the respective routine, e.g.:
"DFT2AmpPhase: Some amplitude values are below the defined threshold "
instead of just
"Some amplitude values are below the defined threshold"
The line F, UF, CX = GUM_DFT(x[m,:], Ux[m], CxCos, CxSin, returnC=True) works when there is no CX.
The following lines:
A[m,:] = A_m[selector]
P[m,:] = P_m[selector]
UAP[m,:ns] = UAP_m.data[0][:N][selector]
UAP[m, ns:2*ns] = UAP_m.data[1][UAP_m.offsets[1]:2*N+UAP_m.offsets[1]][selector]
UAP[m, 2*ns:] = UAP_m.data[0][N:][selector]
Error: Subdimensional views are not implemented.
"requirements.txt" states, that "scipy=1.0.0": https://docs.scipy.org/doc/scipy-1.0.0/reference/generated/scipy.misc.comb.html#scipy.misc.comb
In order to be able to use later versions of scipy, this can be fixed by exchanging the import line to:
from scipy.special import comb
This will cause no incompatibilities, as the required version of scipy already only provides backwards-comptability-links to scipy.misc.comb
.
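A minimal sketch of a version-tolerant import, in case support for scipy versions older than the pinned one is ever needed (for scipy 1.0.0 and later, the plain scipy.special import above is sufficient):

try:
    from scipy.special import comb  # available in current scipy releases
except ImportError:
    from scipy.misc import comb     # fallback for very old scipy versions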
To learn more about compatibility issues, we should include installations for different versions, like in the example for deployment.
In uncertainty.propagate_DFT the calculation of uf is missing an np.diag for the case that the input uncertainty is a full covariance matrix (see the sketch below).
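A minimal sketch of the distinction, with illustrative variable names (not the actual propagate_DFT code):

import numpy as np

Ux = np.array([[0.04, 0.01], [0.01, 0.09]])  # example of a full covariance matrix

if Ux.ndim == 2:
    # full covariance matrix: the per-frequency variances sit on the diagonal
    uf = np.diag(Ux)
else:
    # already a vector of variances
    uf = Ux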
To avoid ambiguous method naming, we will combine all methods from identification and deconvolution in one new module model_estimation, including the fit_filter.py methods that have the same names in both modules. That requires us to rename some methods out of deconvolution.fit_filter.py. Because of the resulting incompatibility with previous versions of PyDynamic, we need to insert a deprecation warning into the next minor release and inform users about the upcoming change (see the sketch below).
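A minimal sketch of such a deprecation shim, with placeholder function names (the actual functions to rename and the new module layout follow from the plan above, not from this sketch):

import warnings

def old_fit_filter_method(*args, **kwargs):
    # Placeholder name: deprecated alias kept for one minor release, forwarding
    # to the renamed function in the new model_estimation module.
    warnings.warn(
        "deconvolution.fit_filter.old_fit_filter_method is deprecated; "
        "use the corresponding function in model_estimation instead.",
        DeprecationWarning,
        stacklevel=2,
    )
    from PyDynamic.model_estimation import new_fit_filter_method  # placeholder import
    return new_fit_filter_method(*args, **kwargs)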
The function DFT_deconv(H, Y, UH, UY) can be used to calculate the transfer function H(f) in the frequency domain by using H, UH = DFT_deconv(Y, X, UY, UX).
Although this is in many cases a primary question when starting from calibration measurements, it is a "well hidden" feature so far.
This feature fills an important gap and should be exposed in the documentation.
It may even be worthwhile to create an additional function pointer/hook DFT_TransferFunc = DFT_deconv in order to make code that uses this feature more readable.
So far, the UMC_generic function expects samples with only one dimension (i.e. a vector). This should be generalized for even more generic use cases.
If possible, the histogram feature should be extended to the multi-dimensional case.
forget it.
In uncertainty.propagate_filter.IIRuncFilter, allow zero uncertainty of the filter parameters.
Remarks:
Uy takes full account of changes in Uab, as published in equation 12 of the associated paper.
Results for Uab in the IIR example show very good agreement between IIRuncFilter and the Monte Carlo method, independent of Uab being zero or not.
Issue generated from in-code-TODO, see #41
The parameter b is denoted as filter numerator coefficients in the corresponding docstring, but is not used at all in the implementation. Either we should include it in the implementation or remove it from the signature.
In the comments explaining the parameters of the function fit_filter(...) it reads:
justFit: boolean, when true stabilization is carried out.
To my understanding it should read (inversely):
justFit: boolean, when true stabilization is not carried out.
We removed the parameter wt from the following method signature, but the parameter should actually be used in the implementation. We should therefore reintroduce the parameter to the signature and implement the weights.
The docstring stated:
wt: np.ndarray of shape (2M,), optional vector of weights
The call:
p,up = fit_sos((data['f']), (data['h']), UH=(data['uh']))
leads to errors in np.linalg.solve (cf. below).
type f: float64
shape f: (161,)
type h: float64
shape h: (322,)
type uh: float64
shape uh: (322, 322)
h and uh are results of DFT_deconv(...),
f is the result of GUM_DFTfreq().
Messages:
/home/bruns01/PTB/Python/Stoszauswertung_tdms/fit_transfer.py:60: RuntimeWarning: covariance is not positive-semidefinite.
HRI = np.random.multivariate_normal(H, UH, runs)
/home/bruns01/PTB/Python/Stoszauswertung_tdms/fit_transfer.py:105: RuntimeWarning: divide by zero encountered in true_divide
iri = np.r_[np.real(1 / H), np.imag(1 / H)]
Traceback (most recent call last):
File "/home/bruns01/PTB/Python/Stoszauswertung_tdms/Shock_PyDynamic.py", line 177, in Identify_total
self.Identify_singles()
File "/home/bruns01/PTB/Python/Stoszauswertung_tdms/Shock_PyDynamic.py", line 262, in Identify_singles
p,up = fit_sos((data['f']), (data['h']), UH=(data['uh']))
File "/home/bruns01/PTB/Python/Stoszauswertung_tdms/fit_transfer.py", line 112, in fit_sos
XVy = (X.T).dot(np.linalg.solve(W, iri))
File "/usr/lib/python3/dist-packages/numpy/linalg/linalg.py", line 384, in solve
r = gufunc(a, b, signature=signature, extobj=extobj)
ValueError: solve1: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (m,m),(m)->(m) (size 644 is different from 322)
The optional parameter MCruns is not included in the corresponding docstring. Although its meaning is fairly obvious, we should include an explanation for the sake of completeness.
As an example:
DFT_deconv(...) returns the (mathematically) complex values of the result X as an np.ndarray of dtype float64, generated by hstack-ing the real and imaginary parts.
fit_transfer.fit_sos(...) in turn expects a complex input "H" (dtype=complex) (cf. devel branch).
The whole package should use a consistent transfer (input/output) of complex-valued vectors or matrices, preferably by using dtype complex wherever appropriate, to avoid any confusion (see the sketch below).
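A minimal sketch of the conversion that currently has to be done by hand between the two conventions (the helper name is illustrative and assumes the hstacked [real, imag] layout described above):

import numpy as np

def ri_to_complex(x_ri):
    # Convert an hstacked [real parts, imaginary parts] float vector
    # into a complex-valued vector of half the length.
    n = len(x_ri) // 2
    return x_ri[:n] + 1j * x_ri[n:]

# Example: turning the hstacked output of a deconvolution into the
# complex input expected by a fitting routine such as fit_sos.
H_ri = np.array([1.0, 0.5, 0.25, 0.0, -0.1, -0.2])
H = ri_to_complex(H_ri)  # array([1.  +0.j , 0.5 -0.1j, 0.25-0.2j])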
The method GUM_DFT returns two numpy arrays. The docstring of that method should include information on how to interpret these arrays, i.e. where the real and where the imaginary parts are stored.
The same should be done for all methods in that module.
We have to choose a package manager (conda or pip) to maintain the dependencies and then set up the corresponding mechanism in the repo (environment.yml or requirements.txt).
There is this PyDynamic Twitter account, which we should either revive or remove.
Check which functions of PyDynamic/misc/tools.py are not used anywhere else in the PyDynamic repo (importing is not necessarily using).
Candidate functions for removal are:
Issue generated from in-code-TODOs, see #41
Apparently one constructor input parameter of the class Normal_ZeroCorr is ambiguously referenced. In the signature it is called mean, whereas in the docstring it is called loc. Worse is a constructor call in line 114 of the file, where it is referenced as loc. We need to rename the call and the docstring and check if there are any other calls using loc.
The method GUM_iDFT has a parameter Nx=... which is meant to represent the requested number of samples in the time domain. However, in the current implementation this is limited by the number of available frequencies. Hence, upsampling of a band-limited signal is not possible for now.
Subsequently applied scipy.signal.resample() calls will again operate in the frequency domain using fft and ifft.
Therefore it seems reasonable to include an upsampling option in GUM_iDFT (see the sketch below).
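A minimal sketch of the underlying idea in plain NumPy, i.e. zero-padding the spectrum before the inverse transform so that the time-domain result has Nx > N samples (this shows only the signal path, not the uncertainty propagation done by GUM_iDFT):

import numpy as np

# Example: band-limited signal of length N, upsampled to Nx samples
N, Nx = 16, 64
t = np.arange(N)
x = np.sin(2 * np.pi * t / N)

X = np.fft.rfft(x)                       # N//2 + 1 complex frequency bins
# irfft with n=Nx implicitly zero-pads the spectrum, i.e. upsamples x
x_up = np.fft.irfft(X, n=Nx) * (Nx / N)  # rescale to preserve the amplitudes

print(x_up.shape)  # (64,)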
The class corr_noise is part of the testsignals module inside the misc package. It is mentioned in PyDynamic's assignment of __all__ but not in the package-level assignment. It should be mentioned in all relevant occurring assignments of __all__.