
Comments (26)

CubicQubit commented on September 3, 2024

I know this is annoying and I'm sorry to bother you! I'm trying to understand the parameters needed to achieve your results. Could you share the hyperparameters that worked best for you, so that I can recover the curves in Figure 1? Thank you, I appreciate it.

fjxmlzn commented on September 3, 2024

Hi,

The hyperparameters you listed should be the ones we used for generating Figure 1. (The only difference is that we used sample_len=10.) The results you showed look very different from what we got. Here are some points I want to double-check. If you are using example_training(without_GPUTaskScheduler), then:

  1. These hyperparameters are set inside https://github.com/fjxmlzn/DoppelGANger/blob/master/example_training(without_GPUTaskScheduler)/main.py, NOT https://github.com/fjxmlzn/DoppelGANger/blob/master/example_training/config.py (see the sketch after this list).
  2. The results will be saved in example_training(without_GPUTaskScheduler)/test, NOT results. And since you are not using GPUTaskScheduler, newer runs will overwrite the results in this folder, so you need to manage it manually.
  3. Make sure you finished all 400 epochs (as set in the parameters).
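
For reference, a minimal sketch of the settings in example_training(without_GPUTaskScheduler)/main.py that matter here (a summary, assuming the repo defaults for everything not listed):

# In example_training(without_GPUTaskScheduler)/main.py -- only sample_len differs
# from the values you listed; everything else stays at the defaults.
sample_len = 10    # the Figure 1 runs used sample_len=10
epoch = 400        # make sure training runs for all 400 epochs
batch_size = 100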

Let me know if you still have problems reproducing it.

CubicQubit commented on September 3, 2024

Okay, I understand your points. That's exactly what I did.

  1. The hyperparameters are in main.py from the example_training(without_GPU) folder. They are:
generator = DoppelGANgerGenerator(
        feed_back=False,
        noise=True,
        feature_outputs=data_feature_outputs,
        attribute_outputs=data_attribute_outputs,
        real_attribute_mask=real_attribute_mask,
        sample_len=sample_len)
discriminator = Discriminator()
attr_discriminator = AttrDiscriminator()

epoch = 400
batch_size = 100
vis_freq = 200
vis_num_sample = 5
d_rounds = 1
g_rounds = 1
d_gp_coe = 10.0
attr_d_gp_coe = 10.0
g_attr_d_coe = 1.0
extra_checkpoint_freq = 5
num_packing = 1

which should be the same as the parameters in config.py.

  2. I can confirm the results are saved in example_training(without_GPUTaskScheduler)/test, and those are the mid-checkpoints that I used to generate the samples. I wrote a generate.py for without_GPUTaskScheduler that loads those mid-checkpoints and generates 100,000 samples (50k train, 50k test) to compare against the real data. This file basically just copies the generate task from example_generating_data.

  3. I trained for all 400 epochs, which takes roughly 11-12 hrs of wall-clock time. However, I used the data sampled from the checkpoint at epoch 399 to generate Figure 1. Maybe checkpoint 399 suffered from overfitting? Did you use a different checkpoint?

fjxmlzn commented on September 3, 2024

We used checkpoint 399 to generate it. I am not sure why it gave bad results. Would you mind sharing the entire code folder (including example_training(without_GPUTaskScheduler)/main.py and the generate.py you wrote) via Google Drive or similar, so that I can look into it?

CubicQubit commented on September 3, 2024

Hi, thank you for offering to help look at the code. I'm honestly not sure what went wrong either.

Here is the GDrive link with the code and epoch_id-399 data (generated samples, viz samples): https://drive.google.com/drive/folders/1M2QvzZjyEP9xevFYjNNurFmxEZKhVrEU?usp=sharing

Let me know if I can help with anything else. There should also be a TensorBoard file there.

fxctydfty commented on September 3, 2024

I reproduced the same Figure 1.
[attached: autocorrelation (acf) plot]

CubicQubit commented on September 3, 2024

I'm guessing you used the training version with GPUTask and sample_len=10, right? I'll try that one next then.

fxctydfty commented on September 3, 2024

I used the version without GPUTask and sample_len=10.

CubicQubit commented on September 3, 2024

Oh okay, nice. Then I'm guessing it's either 1) I got a bad run, so I might rerun, or 2) I set up the Python environment wrong (bad TensorFlow, some floating-point issues). If you are using conda or any other environment, can you share your pip freeze with me? Thanks @alireza-msve. This is mainly because I also just ran the version without GPUTask without messing with anything else.

Edit: wait, if you also used the version without GPUTask, did you write an extra file to generate the time series? My generate.py file is in the GDrive; it should be the same as the generate task file with GPUTask.

fxctydfty commented on September 3, 2024

I used your generate.py file, with Python 3.7.10 and tensorflow 1.14.0.

fjxmlzn commented on September 3, 2024

Thank you @alireza-msve very much for sharing the information!

@CubicQubit Thanks for sharing the code you used. I wanted to debug this for you, but unfortunately I ran out of GPU hours on the cluster I am using a few days ago, and it will take some time before I get more GPU hours. But here is some information that might be helpful:

  1. With or without GPUTaskScheduler shouldn't have any influence on the result, if the hyper-parameters are the same.
  2. For the results in the paper, we had 3 random trials for this dataset with sample_len=10. We picked a random run for drawing Figure 1. I just checked all these runs, and all of them have much better autocorrelation than the one you got:
    autocorr_all_runs.pdf

Since @alireza-msve used exactly the code you shared, I would suggest running it again to double-check. If you still get bad autocorrelation plots, please let me know.

fxctydfty commented on September 3, 2024

If you have some time, could you please look at the code below for autocorrelation? I see that for different epsilon values the figure doesn't change at all.
import torch

EPS = 0.55

def autocorr(X, Y):
    Xm = torch.mean(X, 1).unsqueeze(1)
    Ym = torch.mean(Y, 1).unsqueeze(1)
    r_num = torch.sum((X - Xm) * (Y - Ym), 1)
    r_den = torch.sqrt(torch.sum((X - Xm) ** 2, 1) * torch.sum((Y - Ym) ** 2, 1))

    r_num[r_num == 0] = EPS
    r_den[r_den == 0] = EPS

    r = r_num / r_den
    r[r > 1] = 0
    r[r < -1] = 0

    return r

def get_autocorr(feature):
    feature = torch.from_numpy(feature)
    feature_length = feature.shape[1]
    autocorr_vec = torch.Tensor(feature_length - 2)

    for j in range(1, feature_length - 1):
        autocorr_vec[j - 1] = torch.mean(autocorr(feature[:, :-j], feature[:, j:]))

    return autocorr_vec

fjxmlzn commented on September 3, 2024

This EPS is for ensuring numerical stability when calculating autocorrelation, NOT the DP parameter. You should not change it.

The epsilon in the DP results is controlled by

"dp_noise_multiplier": [0.01, 0.1, 1.0, 2.0, 4.0]

The code will print the DP epsilon computed from it.

fxctydfty commented on September 3, 2024

Is it possible to share the code for DP-autocorrelation?

fjxmlzn commented on September 3, 2024

The code is completely the same. You just generate data using https://github.com/fjxmlzn/DoppelGANger/tree/master/example_dp_generating_data, and then use #20 (comment) to draw autocorrelation.
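
For the drawing step, here is a minimal sketch of how one might plot the autocorrelation (an assumed workflow, not code from the repo): it assumes the generated data is saved as an .npz file with a "data_feature" array of shape (num_samples, length, 1), the file names are hypothetical, and get_autocorr is the PyTorch helper quoted in the comment above.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical file names; point these at the real dataset and at whatever
# generate.py actually writes out. Assumption: features are stored under the
# "data_feature" key with shape (num_samples, length, 1), so the trailing
# dimension is dropped before computing autocorrelation.
real = np.load("data_train.npz")["data_feature"][:, :, 0]
generated = np.load("generated_data_train.npz")["data_feature"][:, :, 0]

# get_autocorr is the helper defined in the earlier comment.
plt.plot(get_autocorr(real).numpy(), label="real")
plt.plot(get_autocorr(generated).numpy(), label="DoppelGANger (DP)")
plt.xlabel("Time lag")
plt.ylabel("Autocorrelation")
plt.legend()
plt.savefig("acf.png")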

The DP parameter (including epsilon) is printed from

print("Using DP training")
print("The final DP parameters will be:")
compute_dp_sgd_privacy(
self.data_feature.shape[0],
self.batch_size * self.num_packing,
noise_multiplier,
self.epoch * self.num_packing,
self.dp_delta)
sys.stdout.flush()
when you do training.

fxctydfty commented on September 3, 2024

Got it, thank you.
The first two epsilon values are the same as in the updated arXiv version, but the last three values - "eps = 9.39", "eps = 1.12", "eps = 0.349" - are different,
where "dp_noise_multiplier": [0.01, 0.1, 1.0, 2.0, 4.0].

fjxmlzn commented on September 3, 2024

This is weird. Here is minimal code for computing these epsilons.

from tensorflow_privacy.privacy.analysis.compute_dp_sgd_privacy_lib import compute_dp_sgd_privacy

if __name__ == "__main__":
    NOISE_MULTIPLIERS = [0.01, 0.1, 1.0, 2.0, 4.0]
    EPOCH = 15
    EPSILONS = [
        compute_dp_sgd_privacy(
            50000,
            100,
            noise_multiplier * 0.5,
            EPOCH,
            1e-5)[0]
        for noise_multiplier in NOISE_MULTIPLIERS]
    print(EPSILONS)

I am getting [187266998.24801102, 1641998.2480110272, 10.515654630508177, 1.451819290643501, 0.45555693961174304], which are the numbers in the arXiv version.

If you get different numbers from it, then probably it is because of TF Privacy updates. I am using TF Privacy 0.5.1.

fxctydfty commented on September 3, 2024

Probably you are right. I ran the above code again and got similar values as before. I am using TF Privacy 0.6.0.

fjxmlzn commented on September 3, 2024

Just double-checking: you mean you get the values you shared in #22 (comment), right?

fxctydfty commented on September 3, 2024

Yes

fjxmlzn commented on September 3, 2024

Cool. Then it should be due to TF Privacy updates.

CubicQubit commented on September 3, 2024

@fjxmlzn @fxctydfty man, I hate TF. Do you guys see these errors when running main.py? I'm seriously thinking it's my environment:

WARNING:tensorflow:Entity <bound method MultiRNNCell.call of <tensorflow.python.ops.rnn_cell_impl.MultiRNNCell object at 0x7f063006e350>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method MultiRNNCell.call of <tensorflow.python.ops.rnn_cell_impl.MultiRNNCell object at 0x7f063006e350>>: AttributeError: module 'gast' has no attribute 'Index'
WARNING:tensorflow:From /home/loctrinh/anaconda3/envs/doppelganger/lib/python3.7/site-packages/tensorflow/python/ops/rnn_cell_impl.py:961: calling Zeros.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version. Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor
WARNING:tensorflow:Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f05c4197b90>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x7f05c4197b90>>: AttributeError: module 'gast' has no attribute 'Index'
WARNING:tensorflow:Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f05bc1a1410>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x7f05bc1a1410>>: AttributeError: module 'gast' has no attribute 'Index'

`pip install gast==0.2.2 --force-reinstall` might fix this.
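
As a quick sanity check against the versions people reported working in this thread (a sketch; it just prints the installed versions, nothing DoppelGANger-specific):

import sys
import pkg_resources
import tensorflow as tf

# Versions reported in this thread: Python 3.7.x and TensorFlow 1.14.0 (working),
# TF Privacy 0.5.1 (used by the author), gast 0.2.2 (avoids the AutoGraph 'Index' warnings).
print("python", sys.version.split()[0])
print("tensorflow", tf.__version__)
print("gast", pkg_resources.get_distribution("gast").version)
print("tensorflow-privacy", pkg_resources.get_distribution("tensorflow-privacy").version)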

CubicQubit commented on September 3, 2024

[attached: autocorrelation (acf) plot]

I got something closer after rerunning. Still not the same, but I'll take it. @fjxmlzn thank you for your help! Please close this issue.

fjxmlzn commented on September 3, 2024

Great! This one looks close to what you should get.

rllyryan commented on September 3, 2024

[attached: autocorrelation plot]

After training for 12.5 hrs using the without GPUTaskScheduler version on a local machine (RTX 3060), this was my plot for the ACF. I used Python 3.7.0 and TensorFlow 1.14.0. What's going on haha

fjxmlzn commented on September 3, 2024

@rllyryan Thanks for sharing the plot, but it looks weird. In follow-up projects, we ran DoppelGANger on this dataset several more times, and we were able to get good autocorrelation plots quite stably.

Let's discuss it in #46
