
timegan's People

Contributors

alatas, jsyoon0823


timegan's Issues

Reason for applying MinMax twice

Hi,
I've been trying to replicate your model in Pytorch. I had a small question that arose as I was going through your code.
Suppose I loaded the stocks data using real_data_loading(). I noticed that the ori_data variable there is MinMax scaled.
Now, when calling timegan(), you seem to apply MinMax scaling again to the already-scaled data.
Am I interpreting this wrongly, or is this how the code works?
If this is intended, could you tell me why we would need to scale the data twice?
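For intuition on what the second pass does: once each feature already spans [0, 1], its column minimum is 0 and maximum is 1, so another min-max pass changes the values only negligibly. A rough sketch of the idea (a simplified 2-D toy, not the repository's exact scaler):

import numpy as np

def min_max_scale(data, eps=1e-7):
    # Scale each column to the [0, 1] range.
    min_val = data.min(axis=0)
    max_val = data.max(axis=0)
    return (data - min_val) / (max_val - min_val + eps)

x = np.random.randn(100, 5)        # raw feature matrix
once = min_max_scale(x)            # first pass (as in the data loader)
twice = min_max_scale(once)        # second pass (as inside timegan())
print(np.abs(once - twice).max())  # on the order of 1e-7, i.e. practically a no-op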

Query regarding loss function

Thank you for providing the source code for the paper.
I have the following queries:

  1. Which loss function corresponds to the reconstruction loss as in the paper? (I suspect G_loss_S)
  2. G_loss is not used; which term in the paper does it correspond to?

Supervised Loss

First of all, great work with TimeGAN! Really an interesting paper.

I have a question regarding the supervised loss, computed as follows:
G_loss_S = tf.losses.mean_squared_error(H[:,1:,:], H_hat_supervise[:,:-1,:])

I understand the purpose of the supervisor. However, I am confused by the fact that the generator is supposed to be trained on G_loss_S in the second step of training TimeGAN (you call it 'Training only with supervised loss'), although it does not contribute to the computation of that loss, so no gradients are available for it. Why is that?
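For reference, the slicing pairs each supervisor output with the next real latent step, i.e. one-step-ahead prediction in latent space. A toy NumPy illustration of the alignment (random tensors, only to show which time steps are compared):

import numpy as np

batch, seq_len, hidden = 4, 24, 8
H = np.random.rand(batch, seq_len, hidden)                # embeddings of real sequences
H_hat_supervise = np.random.rand(batch, seq_len, hidden)  # supervisor output on H

target = H[:, 1:, :]                     # latent states at steps 1 .. T-1
prediction = H_hat_supervise[:, :-1, :]  # supervisor outputs at steps 0 .. T-2
g_loss_s = np.mean((target - prediction) ** 2)            # NumPy analogue of the MSE above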

Discriminative score inconsistency issue

Hi!

I am trying to reproduce your model. I saw that someone converted the code to a TF2 version and you accepted the PR, but then reverted it for some reason. Could I know why you did that?

The visualization metrics for each version (TF1, TF2, PyTorch) match. I tried the discriminative score code from the TF2 commit as well as the discriminative score metric in the TF1 version, and I even wrote a PyTorch discriminative score metric with the same RNN structure. All three models use the same settings, the same data, and the same train/test splitting method, and were run multiple times.

The orders of magnitude for TF2 and PyTorch match. However, in the TF1 version the score is very low, and its order of magnitude is not at the same level as the other two. Could you provide any explanation?

(Screenshots of the discriminative scores for TF1, TF2, and PyTorch were attached.)

"T" is the static feature which in the code ?

In many places the code shows "T: input time information". I was wondering whether "T" is a static feature. If not, what is the static feature in datasets such as Stocks, Energy, and so on?
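From the code, T appears to be the list of per-sample sequence lengths (it is passed to dynamic_rnn as sequence_length), not a static feature. The helper in utils.py does roughly the following (a paraphrased sketch, not a verbatim copy):

def extract_time(ori_data):
    # Record the length of each sequence and the maximum length across the dataset.
    time = [len(seq) for seq in ori_data]
    max_seq_len = max(time)
    return time, max_seq_len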

Question on supervisor function

I am struggling to understand the actual function of the supervisor.
G_loss_S = tf.losses.mean_squared_error(H[:,1:,:], H_hat_supervise[:,1:,:])
What does this line actually do? Doesn't it force the supervisor to learn an identity mapping between H and H_hat_supervise?

What sequence should the supervisor generate? Is it generating from [0, T] to [1, T+1] or is it trying to reflect the original sequence [0, T]?

Generation of single return

Hi,
Is it possible to generate a single financial time series where the temporal relationship is present?

The discriminative score may have an error.

In the TimeGAN/metrics/discriminative_metrics.py file, line 123 reads:
y_label_final = np.concatenate((np.ones([len(y_pred_real_curr),]), np.zeros([len(y_pred_real_curr),])), axis = 0)
Perhaps it should be:
y_label_final = np.concatenate((np.ones([len(y_pred_real_curr),]), np.zeros([len(y_pred_fake_curr),])), axis = 0)
When len(y_pred_fake_curr) == len(y_pred_real_curr), it works. But when the lengths differ, it may be wrong.

Error: AttributeError: module 'tensorflow' has no attribute 'reset_default_graph'

Hi,
Thanks for the wonderful TimeGAN approach. I was trying to execute the Jupyter notebook from Colab, and while executing the line generated_data = timegan(ori_data, parameters) I got an AttributeError. I have attached the error details below. Kindly let me know how to overcome this error. Thanks

Mritula

AttributeError Traceback (most recent call last)
in ()
1 # Run TimeGAN
----> 2 generated_data = timegan(ori_data, parameters)
3 print('Finish Synthetic Data Generation')

/content/cloned-repo/cloned-repo/timegan.py in timegan(ori_data, parameters)
36 """
37 # Initialization on the Graph
---> 38 tf.reset_default_graph()
39
40 # Basic Parameters

AttributeError: module 'tensorflow' has no attribute 'reset_default_graph'
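This usually means the notebook is running TensorFlow 2.x while timegan.py is written against the TF1 graph API. A common workaround (an assumption about the environment, not an official patch from this repository) is to run the code through the TF1 compatibility layer, or to pin an older TensorFlow:

# Option 1: in timegan.py (and the metric files), replace `import tensorflow as tf` with:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Option 2: pin TF1 before importing anything (only where TF 1.15 is still installable):
# !pip install tensorflow==1.15

Note that parts of the code rely on tf.contrib (e.g. tf.contrib.layers.fully_connected), which has no TF2 equivalent even under compat.v1, so pinning TensorFlow 1.x is often the more reliable route.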

Error in Predictive score?

Paper:
Therefore, using the synthetic dataset, we train a post-hoc sequence-prediction model (by
optimizing a 2-layer LSTM) to predict next-step temporal vectors over each input sequence. Then,
we evaluate the trained model on the original dataset.

In the File: predictive_metrics.py
X_mb = list(generated_data[i][:-1, :(dim - 1)] for i in train_idx)
T_mb = list(generated_time[i] - 1 for i in train_idx)
Y_mb = list(np.reshape(generated_data[i][1:, (dim - 1)], [len(generated_data[i][1:, (dim - 1)]), 1]) for i in train_idx)

If I understand correctly: Here the prediction value is the last column of the next row.
Input: generated[i][0][:(dim-1)]
Expected output: generated[i+1][1][dim-1]

Is my understanding correct?
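Reading the quoted lines literally, the input and target both come from the same generated sequence i, shifted by one step: the model sees all but the last feature at steps 0..T-2 and predicts the last feature at steps 1..T-1 (nothing is taken from sequence i+1). A toy illustration with a single hypothetical sequence:

import numpy as np

seq = np.arange(5 * 4).reshape(5, 4)  # one generated sequence: 5 time steps, 4 features
dim = seq.shape[1]

x = seq[:-1, :dim - 1]  # steps 0..3, features 0..2  -> predictor input
y = seq[1:, dim - 1]    # steps 1..4, last feature   -> prediction target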

TF1 is deprecated

Hi !

First, thanks for the amazing work.
I would like to use this code, but unfortunately all the warnings from your Jupyter notebook have become errors now that TensorFlow has moved to its next major version.

Is there any way to have the code updated?

Thanks, and keep up the good work!

Hello again! Thank you for your patience. The score in TF2 matched my experiments in PyTorch. Why would this significant performance degradation happen? Do you have any possible explanation? I have double-checked the TF2 code in the past commit; the only difference is the version-dependent change for the discriminative scores.

Originally posted by @rriecc in #52 (comment)

MinMaxScaler

Hi!

Great work with TimeGAN! I have two questions concerning the normalization: why does the data get scaled in data_loader.py and then again in timegan.py? And why not use, for example, a MinMaxScaler from sklearn, with the advantage that it accepts strings?

Kind regards!

Why does the discriminative score contain np.abs(0.5-acc)?

The discriminative score is used to check the classification error on the generated data.
In discriminative_metrics.py, lines 125-129, we have

  # Compute the accuracy
  acc = accuracy_score(y_label_final, (y_pred_final>0.5))
  discriminative_score = np.abs(0.5-acc)
    
  return discriminative_score 

Why are we subtracting from 0.5 and not from 1? Is there a paper discussing this approach?
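For context on the formula itself: real and synthetic samples are labeled in (roughly) equal numbers, so a classifier that cannot tell them apart lands near 50% accuracy, and |0.5 - acc| measures the distance from that chance level (0 is the best case for the generator, 0.5 means the fake data is trivially detectable). A quick numeric check:

import numpy as np

for acc in (0.50, 0.75, 0.95):
    print(acc, '->', np.abs(0.5 - acc))  # 0.0, 0.25, 0.45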

Why flip the data?

Hi!

Why does the data get flipped for the purpose of chronology? As far as I can see, the original dataset, the energy data for example, is already in chronological order.

Kind regards!

Labelled Data

Hi,
I have a doubt about how to implement multiclass labelled data with TimeGAN, as it is clear it takes unlabelled time-series data as input.
Do I need to train for each class separately, or include the class label as a feature itself?

Thanks in advance.

Reproducibility

I use tutorial_timegan.ipynb with its default hyperparameters but I can't reproduce the results from the paper. I get a discriminative score around 0.133 for the stock dataset and 0.498 for the energy dataset. The predictive score for the energy dataset is around 0.32 which is also higher than the score reported in the paper.

Saving the Trained TimeGAN model

First of all, thanks a lot for your work. The approach is really interesting. I'm trying to follow the given example but I have some problems trying to save the model. If I use TimeGAN.save() I get the following error: "AttributeError: 'TimeGAN' object has no attribute 'MODEL'".

I was wondering if I could receive some feedback. Maybe I am doing it wrong.

Thanks for your time.

Generate your own data

Hello, I would like to know how to use TimeGAN on my own data. I see that the data loading is limited to the stock and energy datasets, so is there a way to generate other data?

Format of output of generated data

Hi!

When I generate data, let's say with a dataset of 1000 samples, sequence length 10, and 20 features, the output of the generated data should have the shape (1000, 10, 20), am I right?
How can I convert the data back into the shape of the original data, i.e. (1000, 20)?
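For reference, the loader builds the 3-D array by slicing the 2-D table into overlapping windows of length seq_len, roughly as in the sketch below (simplified; the repository also shuffles the windows). Because each generated sample is then an independent short sequence rather than a slice of one long series, there is no unique way to stitch the generated output back into a single (1000, 20) table.

import numpy as np

def to_windows(data_2d, seq_len):
    # (rows, features) -> (rows - seq_len, seq_len, features) overlapping windows
    windows = [data_2d[i:i + seq_len] for i in range(len(data_2d) - seq_len)]
    return np.asarray(windows)

table = np.random.rand(1000, 20)         # original-style data
windows = to_windows(table, seq_len=10)
print(windows.shape)                     # (990, 10, 20)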

lstmLN

Hi!

What kind of module is lstmLN?

Kind regards!

AttributeError: module 'tensorflow' has no attribute 'reset_default_graph'

Hi,
When trying to execute the timegan function, I am getting an AttributeError:
# Run TimeGAN
generated_data = timegan(ori_data, parameters)
print('Finish Synthetic Data Generation')

AttributeError Traceback (most recent call last)
in ()
1 # Run TimeGAN
----> 2 generated_data = timegan(ori_data, parameters)
3 print('Finish Synthetic Data Generation')

/content/timeGAN/timegan.py in timegan(ori_data, parameters)
36 """
37 # Initialization on the Graph
---> 38 tf.reset_default_graph()
39
40 # Basic Parameters

AttributeError: module 'tensorflow' has no attribute 'reset_default_graph'

How can I overcome this error?

Discrepancy in between the loss terms mentioned in the paper and the code.

Dear author,

I have the following doubts related to the code :

  • In the following lines of code (TimeGAN.py, line 202):
    # 3. Two Momments
    G_loss_V1 = tf.reduce_mean(tf.abs(tf.sqrt(tf.nn.moments(X_hat,[0])[1] + 1e-6) - tf.sqrt(tf.nn.moments(X,[0])[1] + 1e-6)))
    G_loss_V2 = tf.reduce_mean(tf.abs((tf.nn.moments(X_hat,[0])[0]) - (tf.nn.moments(X,[0])[0])))
    To which term in the paper do these loss terms contribute?

  • Line 280 of the same file,
    if (check_d_loss > 0.15):
        _, step_d_loss = sess.run([D_solver, D_loss], feed_dict={X: X_mb, T: T_mb, Z: Z_mb})
    Why does this value (0.15) have to be the same for all the datasets?

  • In the paper, it was mentioned that z_t are being sampled from the Wiener process. However, in the code, they are being sampled from a uniform distribution, which is not very common in the realm of time series. Any specific reasons?

  • In the definition of L_S, the way it is written is giving the impression that g_X is just being used for one-step dynamics since h_t's are already there from the autoencoder's encoder, in contrast to what is given in the code. So how does that equation capture long-range dynamics?

Thanks for your time, and let me know if I am misconstruing something.

Two theoretical questions

First of all, I'd like to express my gratitude for your interesting work, thank you.

I have two questions, which I would be happy and thankful if you could answer.

Q1: in the paper, at the end of section 3, you claim that Eqn. (1) and Eqn. (2) will take a Jensen-Shannon divergence and Kullback-Leibler divergence, respectively.

Could you please explain why you draw such a conclusion?

Q2: to the best of my knowledge, when training time-series models (e.g., a regressor, your TimeGAN, etc.), it is recommended not to shuffle the time series.

However, you shuffle the data points. Could you please clarify why? (Or, if I'm mistaken, please correct me.)

no need for g_vars

On line 221 in timegan.py, we have GS_solver = tf.train.AdamOptimizer().minimize(G_loss_S, var_list = g_vars + s_vars). However, g_vars are not involved in G_loss_S anywhere. Should we delete g_vars? Did the author intend to also train the generator while training the supervisor?
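One way to check this empirically in the TF1 graph (a diagnostic sketch, assuming G_loss_S, g_vars, and s_vars are defined as in timegan.py): tf.gradients returns None for variables the loss does not depend on.

# Hypothetical snippet, placed after the losses and variable lists are built:
grads = tf.gradients(G_loss_S, g_vars + s_vars)
for var, grad in zip(g_vars + s_vars, grads):
    print(var.name, 'no gradient' if grad is None else 'has gradient')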

About Univariate Data

Hello Sir,

Thank you for the impressive work. I tried to generate consumption data, and it is really good.
I have one issue: I want to train with only one column, but it gives an error. How can I do this?

Thanks in advance

Supervised Loss

Hi,

Would you be able to elaborate on the following, specifically why H[:,1:,:] is paired with H_hat_supervise[:,:-1,:]?
G_loss_S = tf.losses.mean_squared_error(H[:,1:,:], H_hat_supervise[:,:-1,:])

I think if we were to deal with uni-directional time series (such as financial data), shouldn't it be the following so that we don't look ahead?
G_loss_S = tf.losses.mean_squared_error(H[:,:-1, :], H_hat_supervise[:,1:,:])

Thanks and hope to hear back from you soon!

question about static features

Hi, thanks for your great work on time-series data generation. However, I have a question that is not very clear to me.
In your paper, you say each time-series instance consists of two elements: static features S and temporal features X, and the embedding and recovery functions map these two features to latent vector spaces (using an embedding network for static features and a recurrent embedding network for temporal features). Does that mean the input to the embedding network is a tuple containing these two features, and if so, how can we get the static features? (Are static features not learned through the training process?) In your code, it seems the raw time-series sequences are fed directly to the embedding network, and there is only one embedding network. Can you explain that? Thanks!

License

Hey Jinsung Yoon,

thanks to you and your colleagues for sharing the code for your amazing paper "Time-series Generative Adversarial Networks".

I am currently working on my thesis and came across your implementation of the PCA/T-SNE visualization.
I don't want to use it exactly as it is right now, but wanted to derive my own implementation from your code. However, in order to do this, I wanted to ask about the licensing model for this repository or the code in general.

Could you perhaps include a license file in the repository for clarification?

Thanks in advance,
Florian

Results on `Sines` dataset.

The sines dataset synthesis code is different from the description in the paper.

If I generate the sine wave as described in the paper, with $freq \sim U[0,1]$ and $phase \sim U[-\pi, \pi]$, the predictive scores are completely different. Does anyone know where the problem is?

original code:

  for k in range(dim):
    # Randomly drawn frequency and phase
    freq = np.random.uniform(0, 0.1)            
    phase = np.random.uniform(0, 0.1)
        
    # Generate sine signal based on the drawn frequency and phase
    temp_data = [np.sin(freq * j + phase) for j in range(seq_len)] 
    temp.append(temp_data)

My code:

for k in range(dim):
    # Randomly drawn frequency and phase
    freq = np.random.uniform(0, 1)
    phase = np.random.uniform(-np.pi, np.pi)

    # Generate sine signal based on the drawn frequency and phase
    temp_data = [np.sin(2 * np.pi * freq* j / float(seq_len) + phase) for j in range(seq_len)]
    temp.append(temp_data)

Supervised loss and generator

Supervised loss

G_loss_S = tf1.losses.mean_squared_error(H[:,1:,:], H_hat_supervise[:,:-1,:])

Train generator

_, step_g_loss_s = sess.run([GS_solver, G_loss_S], feed_dict={Z: Z_mb, X: X_mb, T: T_mb})

Why can G_loss_S be used to train the generator? Or does 'generator' here actually refer to the supervisor?

Processing the Original Data and the Generated Data

Hi jsyoon0823,

Thanks for this research and the code, it is great!

However, I had some doubts about the pre-processing of the original time series data.

In the research, we assume an i.i.d. distribution over the input sequences. So we cut the time-series data (the stock data) into shorter sequences and mix them to be i.i.d., and then generate synthetic data for those shorter time series. But is there a way to get the original as well as the generated time series back in the input format (the format given in the Excel sheets for the data)?

Saving and loading the GAN

I am trying to save the GAN for the generation of output without training again. However, I am having a problem with saving and loading the TimeGAN. I am trying to save the GAN using Saver.save function.

On the saving side,

saver = tf.train.Saver()   
sess = tf.Session()
sess.run(tf.global_variables_initializer())

(training)

saver.save(sess, save_path)

On the loading side,

saver = tf.train.import_meta_graph(meta_path)
sess = tf.Session()
saver.restore(sess, tf.train.latest_checkpoint(checkpoint_path))

However, it causes an 'uninitialized value' error for the generator.

I have tried another way of loading,

(define TimeGAN structure)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
saver.restore(sess, tf.train.latest_checkpoint(checkpoint_path))

However, the result is very different from the one generated right after training. I suspect that the generator simply used the initial values instead of the trained values.

Am I doing anything wrong and how should I correctly save and load the GAN?
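For reference, a minimal TF1-style checkpointing pattern (a sketch assuming the identical graph is rebuilt before restoring; this is not an official saving mechanism of the repository):

import tensorflow as tf  # TF 1.x, or tensorflow.compat.v1 with v2 behavior disabled

# Build (or rebuild) the TimeGAN graph here; a dummy variable stands in for it in this sketch.
dummy = tf.get_variable('dummy', shape=[1])

# --- training script: save after training ---
saver = tf.train.Saver()  # created after the graph is built, so it covers all variables
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training ...
    saver.save(sess, './checkpoints/timegan')

# --- generation script: rebuild the identical graph, then restore ---
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('./checkpoints'))
    # restored variables need no extra initializer call here
    # ... run the generation ops ...

One caveat: even with correctly restored weights, the generated samples will differ from the post-training ones because Z is drawn fresh at every generation call, so differing outputs alone do not prove that the restore failed.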

Classification with Discriminator

Hello, Thanks for providing the code for your paper

Could you please explain why the discriminator's output Y_fake (for example) is a 3D tensor, given that its function is to classify each sample as real/fake?

More specifically, shouldn't the classification CE loss classify each sequence (i.e., each sample) instead of classifying at the time-step level?

Would this change make sense in this regard?

def discriminator (H, T):    
    with tf.variable_scope("discriminator", reuse = tf.AUTO_REUSE):
      d_cell = tf.nn.rnn_cell.MultiRNNCell([rnn_cell(module_name, hidden_dim) for _ in range(num_layers)])
     # use output for last time step instead
      d_outputs, d_last_states = tf.nn.dynamic_rnn(d_cell, H, dtype=tf.float32, sequence_length = T)
      Y_hat = tf.contrib.layers.fully_connected(d_outputs[:, -1, :], 1, activation_fn=None) 
    return Y_hat  

Trying to add conditions in TimeGAN

I am trying to add conditions similarly to how we go from a GAN to a Conditional GAN. I tried to add a labels parameter to the input of the generator and discriminator, but I get a value error for the shape in the line Y_fake = discriminator(H_hat, T, y). I guess it is because it takes H_hat as the input, which in turn is the output of the supervisor, H_hat = supervisor(E_hat, T). So, do I need to add label parameters to the supervisor and embedder as well? Can you please guide me on how I can insert conditions for TimeGAN?

(Screenshots attached: the generator and discriminator code, the training code, and the value error for the condition on TimeGAN.)

Confused about s(tatic) feature

Hi, jsyoon0823:
Your paper looks great and thanks for providing the code for the paper.
I'm a little confused about the s(tatic) feature and the stationarity of the input.

For stock case:

  • What does the s mentioned in the paper represent? Can s represent the average/mean or other statistical features, so that x = x_ori - mean(x_ori) in each window is stationary over the long period?

And in your implements:

  • Where are the s and z_s terms mentioned in the paper?
  • Don't we need to transform the original stock data and make it stationary before feeding it to the models?

Did I miss something?
Thanks.

Reproducibility

Could you share the hyperparameters needed to achieve the performance listed in Table 2 of the paper? Using the default hyperparameters, I got numbers for the discriminative score similar to what Atrin78 mentioned here (#7).

--data_name stock --seq_len 24 --module gru --hidden_dim 24 --num_layer 3 --iteration 10000 --batch_size 128 --metric_iteration 10
{'discriminative': 0.20225102319236016, 'predictive': 0.03779167104371475}
--data_name stock --seq_len 24 --module gru --hidden_dim 24 --num_layer 3 --iteration 50000 --batch_size 128 --metric_iteration 10
{'discriminative': 0.12578444747612552, 'predictive': 0.03670826588897788}
--data_name energy --seq_len 24 --module gru --hidden_dim 24 --num_layer 3 --iteration 50000 --batch_size 128 --metric_iteration 10
{'discriminative': 0.4985924423028151, 'predictive': 0.32724065483352016}

There are also some discrepancies between the hyperparameters used in the code and the hyperparameters mentioned in the paper. I roughly tried both versions and some of their combinations but was still not able to achieve the mentioned performance. Using the hyperparameters in the paper I got:
Stock:
run 1:
discriminative: 0.212; predictive: 0.039
run 2:
discriminative: 0.163; predictive: 0.040

Could you kindly share all the hyperparameters that are needed to get the performance in the paper (stock: .102, 0.038; energy: .236, .273)?

Additionally, as mentioned in #25, in the second stage of training, we have GS_solver = tf.train.AdamOptimizer().minimize(G_loss_S, var_list=g_vars + s_vars). However, G_loss_S does not seem to depend on g_vars and thus only s_vars are updated during this stage of training. So should it be GS_solver = tf.train.AdamOptimizer().minimize(G_loss_S, var_list=s_vars) only? Or is there something else I am missing? Based on tutorial_timegan.ipynb, in the second stage of training, the printed numbers on this Github repo are
Start Training with Supervised Loss Only
step: 0/10000, s_loss: 0.273
step: 1000/10000, s_loss: 0.0191
step: 2000/10000, s_loss: 0.009
step: 3000/10000, s_loss: 0.0071
step: 4000/10000, s_loss: 0.0054
step: 5000/10000, s_loss: 0.0054
step: 6000/10000, s_loss: 0.0042
step: 7000/10000, s_loss: 0.0041
step: 8000/10000, s_loss: 0.003
step: 9000/10000, s_loss: 0.0028

s_loss keeps decreasing and reaches 0.0028. However, when I tried running the code locally with all the hyperparameters being set to the same values, I get the following s_loss in the second stage:
step: 1000/10000, s_loss: 0.0263
step: 2000/10000, s_loss: 0.0234
step: 3000/10000, s_loss: 0.0222
step: 4000/10000, s_loss: 0.0205
step: 5000/10000, s_loss: 0.0201
step: 6000/10000, s_loss: 0.0187
step: 7000/10000, s_loss: 0.0202
step: 8000/10000, s_loss: 0.0196
step: 9000/10000, s_loss: 0.0196

s_loss seems to stay around 0.02 instead of 0.002 as in the uploaded notebook results. Could you confirm if there's anything I might be missing?

Thanks in advance!

Autoencoder and latent space

Hi,

first - great work!

I'm a bit confused about the role of the autoencoder. According to the paper, the embedder and recovery functions allow the adversarial network to learn temporal dynamics via lower-dimensional representations. When choosing hidden_dim = 24 as in the example, isn't that a higher-dimensional representation in latent space?

Thanks and best

How to process the unfixed length samples of the datasets?

Hi, Yoon!
It is great work for sequence generation! I am new to this field, so may I ask: if I want to use my own dataset where each multidimensional sample has a different length (in other words, each sample has shape [timestamp, features], where 'timestamp' varies while 'features' is fixed), what should I do to process the data and set up the training model?
Yours, Hu!

Why only use a single batch for each iteration?

Hi,

I'm currently working on a synthetic stock data generation project and found TimeGAN. I just wonder why the code uses only a single sampled mini-batch (128 in the example) for each step, rather than the whole training set.

Cheers.

Simulation Reproducibility

Hi,
I am having trouble reproducing the results on the autoregressive multivariate Gaussian data in section 5.1. I wrote multivariate Gaussian data generation code according to your description, but I could not reproduce the results. What should I do to reproduce them?
Here is my code; can you help me resolve this difficulty? Thanks in advance!

def normalize(data):
    def min_max_norm(data):
        """ Normalize data to range [0, 1]. """
        min_val = np.min(data, axis=0)
        max_val = np.max(data, axis=0)
        data = (data - min_val)/(max_val - min_val + 1e-7)

        return data
    
    ori_data = np.array(data)

    return min_max_norm(ori_data)

def get_multi_gaussian_data(num_samples, seq_len, num_features, phi, sigma, burn_in=0):
    """ Multivariate Gaussian Data Generation

    Args:
        - num_samples: the number of samples
        - seq_len: sequence length of each time-series sample
        - num_features: the number of features of multivariate gaussian data
        - phi: control the correlation across time steps
        - sigma: control the correlation across features

    Returns:
        - data: generated multi-gaussian data, [num_samples, seq_len, num_features]    
    """
    # First: generate time-series data
    total_len = num_samples + seq_len + burn_in
    x = np.zeros((total_len, num_features))
    ## Set mean and covariance matrix for the multivariate normal distribution
    one = np.ones((num_features, num_features))
    ide_mat = np.identity(num_features)     # identity matrix
    mu = np.zeros(num_features)
    cov_mat = sigma*one + (1-sigma)*ide_mat     # 1 on the diagonal, sigma off-diagonal
    ## epsilon matrix
    eps_mat = np.random.multivariate_normal(mu, cov_mat, total_len)
    ## generate t+1 step from t step
    x[0, :] = 0     # x_0 = 0
    for t in range(1, total_len):   # x_1, x_2, ..., x_{total_len-1}
        x[t, :] = phi*x[t-1, :] + eps_mat[t-1, :]
    ## delete first burn_in samples
    x = x[burn_in:, :]

    # Second: data normalization
    x = normalize(x)

    # Third: samples generation
    ## Initialize the output
    data = []
    ## each sample has shape seq_len*num_features
    for i in range(x.shape[0] - seq_len):
        data_ = x[i:(i+seq_len), :]
        data.append(data_)
    
    return np.array(data)       # [num_samples, seq_len, num_features]

Random vector z_t from the Wiener process

Firstly, many thanks for your countless efforts on the work of TimeGAN and the distribution of the package to the community. I'm really enjoying implementing it!

One question to ask regarding the random vector, from the theory within the paper and the code implementation.

In the paper, z_t is described as coming from a Wiener process, which I think is reasonable since time correlation is present throughout the series.
In the code implementation, however, I've noticed that the random vector is generated via the random_generator function in utils.py, which samples its elements with NumPy's uniform function.

As far as I know, the NumPy uniform function is independent of the previously sampled value and does not form a Wiener process.

Am I missing something here?

Thanks.
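For anyone comparing the two choices, here is a small illustrative sketch (not the repository's code) of i.i.d. uniform noise versus a discretized Wiener process, whose steps are correlated because each value is the cumulative sum of Gaussian increments:

import numpy as np

def iid_uniform_z(batch_size, seq_len, z_dim):
    # Independent noise at every time step (roughly what random_generator in utils.py produces).
    return np.random.uniform(0., 1., (batch_size, seq_len, z_dim))

def wiener_z(batch_size, seq_len, z_dim, dt=1.0):
    # Discretized Wiener process: cumulative sum of independent Gaussian increments.
    increments = np.random.normal(0., np.sqrt(dt), (batch_size, seq_len, z_dim))
    return np.cumsum(increments, axis=1)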

GPU support

Is there any CUDA support available? As far as I can see, the raw TimeGAN code doesn't have proper CUDA support. If not, is GPU computation for TimeGAN planned in the future?

Frequency and Phase of the Sine Function

In the paper, the sine parameters are such that the (angular) frequency falls into the range (0, 2π) and the phase into [−π, π]:
xi(t) = sin(2πηt + θ), where η ∼ U[0, 1] and θ ∼ U[−π, π].

However, in the code implementation, the range of both freq and phase is [0, 0.1].
freq = np.random.uniform(0, 0.1)
phase = np.random.uniform(0, 0.1)

Could you please explain the inconsistency? Thank you very much.
