
jumpdiff's People

Contributors

lrydin

jumpdiff's Issues

Reproducing the results in the examples fails

Thank you for this great work!

As a mechanical engineer, I currently do not have a strong mathematical background in the formulation of stochastic processes, but I would like to use this package to perform a Monte Carlo simulation for energy system analysis, e.g., by sampling projections of the development of energy prices. I tried to reproduce the results from the code snippets in the documentation's examples but was not able to obtain the same results. I have done the following:

Firstly, I simulated a jump-diffusion process using the same code given in your documentation:

import jumpdiff as jd

# integration time and time sampling
t_final = 10000
delta_t = 0.001

# A drift function
def a(x):
    return -0.5*x

# and a (constant) diffusion term
def b(x):
    return 0.75

# Now define a jump amplitude and rate
xi = 2.5
lamb = 1.75

# and simply call the integration function
X = jd.jd_process(t_final, delta_t, a=a, b=b, xi=xi, lamb=lamb)

The plot of X is given below, and so far, it looks similar to the results given in the examples.

Then I tried the ability of the package to estimate the given parameters, and the results started to become a bit confusing. I used the following code according to the documentation:

import matplotlib.pyplot as plt

edges, moments = jd.moments(timeseries = X)

# we want the first power, so we need 'moments[1,...]'
plt.plot(edges, moments[1,...], label='D1')
plt.plot(edges, moments[2,...], label='D2')

plt.xlabel('x') 
plt.ylabel('D_m(x)')  
plt.legend()

plt.xlim(-5, 5) 

plt.show()

The results show the first two Kramers-Moyal coefficients. In comparison with the results in the documentation for the same simulation, the values of the coefficients seem to be completely different:

(figures: my estimated D1 and D2, and the corresponding plot from the documentation)

I have also looked into your paper. Figure 2 shows the estimation results for the coefficients based on the same simulation. Here, the second Kramers-Moyal coefficient D2 shows again a different value (shown below). In this case, it seems to directly correspond to the constant value b that was chosen in the model parametrization, whereas the former results are stating that the resulting curve is also dependent on the parameters of the jump process.

(figure: second Kramers-Moyal coefficient, cf. Figure 2 of the paper)

Could you clarify these observations and provide some more guidance on how to retrieve the final parameters of the jump-diffusion process from the Kramers-Moyal coefficients? There might be a big misunderstanding on my side, since I am not yet very familiar with the theory behind the package.

Thank you so far!
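As a pointer on the question above: for a jump-diffusion, the second conditional moment of the increments behaves as E[ΔX² | X=x] ≈ (b(x)² + λσ_ξ²)Δt, so the estimated second Kramers-Moyal coefficient plateaus at b² + λσ_ξ² rather than at b² alone, which is why the curve depends on the jump parameters. A numpy-only sketch (independent of jumpdiff, reusing the example's parameters; the normalization conventions of jd.moments may differ by constant factors) illustrates this:

```python
import numpy as np

# Same parameters as the example above
t_final, dt = 200, 0.001
xi, lamb = 2.5, 1.75                    # jump-amplitude variance and jump rate
a = lambda x: -0.5 * x                  # drift
b = lambda x: 0.75                      # (constant) diffusion

rng = np.random.default_rng(1)
n = int(t_final / dt)
x = np.zeros(n)
dw = rng.normal(0.0, np.sqrt(dt), n)    # Brownian increments
dj = rng.poisson(lamb * dt, n)          # jump counts per step
for i in range(1, n):
    x[i] = x[i-1] + a(x[i-1]) * dt + b(x[i-1]) * dw[i]
    if dj[i] > 0:
        # compound-Poisson jump: sum of dj[i] iid N(0, xi) amplitudes
        x[i] += rng.normal(0.0, np.sqrt(xi), dj[i]).sum()

dx = np.diff(x)
slope = np.polyfit(x[:-1], dx / dt, 1)[0]   # first conditional moment -> drift
m2 = np.mean(dx**2) / dt                    # second conditional moment
print(slope)   # ≈ -0.5, the drift coefficient
print(m2)      # ≈ b^2 + lamb*xi = 0.5625 + 4.375 ≈ 4.94, not b^2 alone
```

Separating λ from σ_ξ² requires higher-order moments; the package's jd.jump_amplitude and jd.jump_rate helpers (used in the script in the next issue) are meant for that step.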

Mistake in `jd_process.py`

Thank you for your great work on jumpdiff; it is the first Python package providing both simulation and estimation for research on SDEs with jumps. However, I found a mistake in the simulation part.

In jd_process.py, you treat the jump term as dJ[i] * np.random.normal(loc = 0, scale = np.sqrt(xi)) for both the Euler and Milstein schemes, which is an incorrect simulation method. The correct method is to treat the jump as the sum of dJ[i] iid samples drawn from that normal distribution. Data generated with the incorrect scheme biases the estimators. The example used in the GitHub introduction and in the publication has a very small lambda, so dJ[i] is almost always at most 1 and the incorrectly simulated data are nearly indistinguishable from the correct ones. If you change lambda to 50, however, the estimation breaks down with a large bias of about 30, as shown in the attached code. After switching to the correct simulation method, the estimation works for both small and large lambda.

I think your estimation method itself is fine, since I have tested it on other samples generated with the correct simulation code, but the mistake in jd_process.py should be corrected.
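The variance mismatch described above can be seen without the package at all: scaling one Gaussian draw by the jump count dJ gives variance dJ²·ξ, whereas summing dJ independent draws (the compound-Poisson definition) gives variance dJ·ξ. A small standalone check (illustrative values, not taken from the repository):

```python
import math
import random
import statistics

xi = 2.5        # jump-amplitude variance
n_jumps = 5     # several jumps in a single step, as happens for large lambda
trials = 200_000
random.seed(0)

# "wrong": one draw scaled by the jump count -> variance n_jumps**2 * xi
wrong = [n_jumps * random.gauss(0.0, math.sqrt(xi)) for _ in range(trials)]
# "right": sum of n_jumps independent draws -> variance n_jumps * xi
right = [sum(random.gauss(0.0, math.sqrt(xi)) for _ in range(n_jumps))
         for _ in range(trials)]

print(statistics.variance(wrong))   # ≈ n_jumps**2 * xi = 62.5
print(statistics.variance(right))   # ≈ n_jumps * xi = 12.5
```

For small lambda·dt the step rarely carries more than one jump, so dJ² = dJ and the two schemes coincide, which is consistent with the bias only appearing at large lambda.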

Incorrect code in `jd_process.py`:

# Integration, either Euler
if solver == 'Euler':
    for i in range(1, length):
        X[i] = X[i-1] + a(X[i-1]) * delta_t + b(X[i-1]) * dw[i]
        if dJ[i] > 0.:
            X[i] += dJ[i] * np.random.normal(loc = 0, scale = np.sqrt(xi))

if solver == 'Milstein':
    # Generate corrective terms of the Milstein integration method
    dw_2 = (dw**2 - delta_t) * 0.5

    for i in range(1, length):
        X[i] = X[i-1] + a(X[i-1]) * delta_t + b(X[i-1]) * dw[i] \
                + b(X[i-1]) * b_prime(X[i-1]) * dw_2[i]
        if dJ[i] > 0.:
            X[i] += dJ[i] * np.random.normal(loc = 0, scale = np.sqrt(xi))

Corrected code for `jd_process.py`:

# Integration, either Euler
if solver == 'Euler':
    for i in range(1, length):
        X[i] = X[i-1] + a(X[i-1]) * delta_t + b(X[i-1]) * dw[i]
        if dJ[i] > 0.:
            for j in range(int(dJ[i])):
                X[i] += np.random.normal(loc = 0, scale = np.sqrt(xi))

if solver == 'Milstein':
    # Generate corrective terms of the Milstein integration method
    dw_2 = (dw**2 - delta_t) * 0.5

    for i in range(1, length):
        X[i] = X[i-1] + a(X[i-1]) * delta_t + b(X[i-1]) * dw[i] \
                + b(X[i-1]) * b_prime(X[i-1]) * dw_2[i]
        if dJ[i] > 0.:
            for j in range(int(dJ[i])):
                X[i] += np.random.normal(loc = 0, scale = np.sqrt(xi))

Simulation and estimation with large lambda:

# -*- coding: utf-8 -*-
"""
Created on Fri Feb 10 15:19:29 2023

@author: JChonpca_Huang
"""
import jumpdiff as jd
import matplotlib.pyplot as plt

# integration time and time sampling
t_final = 1000
delta_t = 0.001

# A drift function
def a(x):
    return -0.5*x

# and a (constant) diffusion term
def b(x):
    return 0.75

# Now define a jump amplitude and rate

xi = 2.5

lamb = 50


# and simply call the integration function
X = jd.jd_process(t_final, delta_t, a=a, b=b, xi=xi, lamb=lamb)

plt.plot(X)
plt.show()

edges, moments = jd.moments(timeseries = X)


# we want the first power, so we need 'moments[1,...]'
plt.plot(edges, moments[1,...])
plt.show()

plt.plot(edges, moments[2,...])
plt.show()

# first estimate the jump amplitude
xi_est = jd.jump_amplitude(moments = moments)

print(xi_est)

# and now estimate the jump rate
lamb_est = jd.jump_rate(moments = moments)

print(lamb_est/delta_t)

negative lambda and others

This is a great package, and I have some questions.
I recently came across the Kramers-Moyal (KM) perspective for estimating the Lévy triplet through the 2019 book by Mr Tabar. Statisticians, especially French scholars, have done a lot of work on it. However, real time series are traces of dynamical processes governed by unknown laws; methods that only focus on data, without the support of some natural-law governance, may not be the right road, and the KM perspective shows me a direction.
I tried your package last night on some real time series, but many of them yield a negative lambda. Besides, I want to use the estimated lambda and xi to generate a surrogate process to compare with the original time series (I also want to obtain the drift and diffusion from the moments, but the moments are not scalar, so I calculated them from the original series); however, the generated X contains many NaNs. Could you give some guidance?
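On the NaN point: one way to isolate the cause is to generate the surrogate with a minimal standalone integrator and guard the inputs, since, for instance, a negative estimated lambda cannot be fed to a Poisson sampler. This numpy-only sketch is not the package's jd_process (the function and parameter names here are illustrative):

```python
import numpy as np

def surrogate(t_final, dt, drift, diff, xi, lamb, x0=0.0, seed=0):
    """Euler scheme for dX = drift(X) dt + diff(X) dW + compound-Poisson jumps."""
    if lamb < 0 or xi < 0:
        raise ValueError("jump rate and amplitude variance must be non-negative")
    rng = np.random.default_rng(seed)
    n = int(t_final / dt)
    x = np.empty(n)
    x[0] = x0
    dw = rng.normal(0.0, np.sqrt(dt), n)   # Brownian increments
    dj = rng.poisson(lamb * dt, n)         # jump counts per step
    for i in range(1, n):
        x[i] = x[i-1] + drift(x[i-1]) * dt + diff(x[i-1]) * dw[i]
        if dj[i] > 0:
            # compound-Poisson jump: sum of dj[i] iid N(0, xi) amplitudes
            x[i] += rng.normal(0.0, np.sqrt(xi), dj[i]).sum()
    return x

X = surrogate(100, 0.001, lambda x: -0.5*x, lambda x: 0.75, xi=2.5, lamb=1.75)
print(np.isfinite(X).all())   # True: no NaNs for valid parameters
```

If this integrator also produces NaNs with the estimated parameters, the parameters themselves (negative lambda, negative xi, or an explosive drift fit) are the likely culprit rather than the simulation routine.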
The KM perspective is a nonparametric method, which may have some similarities with the statistical methods of, e.g., Fabienne Comte or Jean Jacod. Could you share some details on the relation?
Finally, neural networks are a nonparametric way to approximate an unknown distribution with stacked layers. Do you have any plans to try them?

Many thanks

Example notation

Hi @LRydin, thank you for this great package. I have a question regarding the notation in the definition of the jump term. The parameter xi is referred to as the jump amplitude, yet it is normally distributed with variance sigma_xi. Is the xi parameter as set in the example actually the standard deviation of the jump amplitude?
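Judging from the jd_process.py snippet quoted in an earlier issue, jump amplitudes are drawn as np.random.normal(loc = 0, scale = np.sqrt(xi)), which would make xi the *variance* sigma_xi^2 of the jump amplitude rather than its standard deviation. This is an inference from that snippet, not a confirmed reading of the documentation; a quick standalone check of the sampling convention:

```python
import math
import random
import statistics

xi = 2.5   # the value passed to jd_process in the example
random.seed(0)
# Drawing with scale = sqrt(xi), as the quoted jd_process.py code does,
# yields samples whose *variance* is xi (standard deviation sqrt(xi)).
draws = [random.gauss(0.0, math.sqrt(xi)) for _ in range(200_000)]
print(statistics.variance(draws))   # ≈ xi = 2.5
```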

Furthermore, is there a way to set the initial condition, e.g., X_0 = 0?

Thanks again!
Kyriakos
