bgu-cs-vil / dtan

Official PyTorch implementation of our NeurIPS 2019 paper, Diffeomorphic Temporal Alignment Nets. A TensorFlow/Keras version is available on the tf_legacy branch.

License: MIT License

Languages: Jupyter Notebook 80.41%, Python 13.84%, Cuda 3.65%, C++ 1.98%, C 0.12%
Topics: alignment, deep-learning, pytorch, temporal-transformer, tensorflow, time-series, time-series-classification, ucr

dtan's People

Contributors

aasthaengg, akryeem, dependabot[bot], freifeld, ronshapiraweber


dtan's Issues

AttributeError: 'list' object has no attribute 'dtype'

After training the model, I get this error when plotting the outputs.

Traceback (most recent call last):
  File "UCR_alignment.py", line 111, in <module>
    run_UCR_alignment(args)
  File "UCR_alignment.py", line 104, in run_UCR_alignment
    DTAN.plot_RDTAN_outputs(model, X_train, y_train, ratio=[6,4])
  File "/home/dtan/DTAN/DTAN_layer.py", line 123, in plot_RDTAN_outputs
    plot_all_layers(model, X, y, self.n_recurrences, ratio)
  File "/home/dtan/helper/plot_transformer_layer.py", line 33, in plot_all_layers
    X_within_class_aligned = curr_layer([X_within_class])
  File "/home/dtan/venv_dtan/lib/python3.6/site-packages/tensorflow/python/keras/backend.py", line 2961, in __call__
    tensor_type = dtypes_module.as_dtype(tensor.dtype)
AttributeError: 'list' object has no attribute 'dtype'

If I remove the [] around X_within_class, I get another error:

Traceback (most recent call last):
  File "UCR_alignment.py", line 111, in <module>
    run_UCR_alignment(args)
  File "UCR_alignment.py", line 104, in run_UCR_alignment
    DTAN.plot_RDTAN_outputs(model, X_train, y_train, ratio=[6,4])
  File "/home/dtan/DTAN/DTAN_layer.py", line 123, in plot_RDTAN_outputs
    plot_all_layers(model, X, y, self.n_recurrences, ratio)
  File "/home/dtan/helper/plot_transformer_layer.py", line 33, in plot_all_layers
    X_within_class_aligned = curr_layer(X_within_class)
  File "/home/dtan/venv_dtan/lib/python3.6/site-packages/tensorflow/python/keras/backend.py", line 2937, in __call__
    raise TypeError('`inputs` should be a list or tuple.')
TypeError: `inputs` should be a list or tuple.

Is there a better way to plot the results of the alignment?
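For what it's worth, one workaround that may help (a sketch, not the repo's official fix): the TF 1.x Keras backend function expects each element of the input list to be array-like with a dtype, so converting X_within_class to a float32 NumPy array before the call avoids the AttributeError:

import numpy as np

# Sketch only: X_within_class and curr_layer are the variables from
# helper/plot_transformer_layer.py; converting the plain Python list into a
# float32 array gives it the .dtype attribute the backend function looks for.
X_within_class = np.asarray(X_within_class, dtype=np.float32)
X_within_class_aligned = curr_layer([X_within_class])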

DTAN training on personal time series. tensorflow.python.framework.errors_impl.InvalidArgumentError: unique expects a 1D vector.

I am trying to run the alignment network on my own dataset.

I modified UCR_alignment and use an X_train and y_train with dimensions (num_samples, time_series_length, 1) and (num_samples,), respectively. This directly mirrors the formatting of the UCR archive datasets.

However, with my own dataset I get the following error, which does not occur with the UCR archive datasets. What am I doing wrong? I am fairly sure it is not an issue with the TensorFlow version.

/home/gk32721/dtan/venv_dtan/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:108: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
Train on 3 samples, validate on 1 samples
Epoch 1/1000
2020-07-06 09:43:21.556262: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at transpose_op.cc:157 : Invalid argument: transpose expects a vector of size 1. But input(1) is a vector of size 2
Traceback (most recent call last):
  File "../examples/UCR_alignment.py", line 105, in <module>
    run_UCR_alignment(args, dataset_name="Computers")
  File "../examples/UCR_alignment.py", line 81, in run_UCR_alignment
    model, DTAN = run_alignment_network(X_train, y_train, args)
  File "/home/gk32721/dtan/models/train_model.py", line 62, in run_alignment_network
    verbose=1)
  File "/home/gk32721/dtan/venv_dtan/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1605, in fit
    validation_steps=validation_steps)
  File "/home/gk32721/dtan/venv_dtan/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 232, in fit_loop
    verbose=0)
  File "/home/gk32721/dtan/venv_dtan/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 436, in test_loop
    batch_outs = f(ins_batch)
  File "/home/gk32721/dtan/venv_dtan/lib/python3.6/site-packages/tensorflow/python/keras/backend.py", line 2978, in __call__
    run_metadata=self.run_metadata)
  File "/home/gk32721/dtan/venv_dtan/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1399, in __call__
    run_metadata_ptr)
  File "/home/gk32721/dtan/venv_dtan/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 526, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: unique expects a 1D vector.
         [[{{node loss/Temporal_Alignment_Layer0_loss/Unique}} = Unique[T=DT_FLOAT, _class=["loc:@loss/...yScatterV3"], out_idx=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](loss/Temporal_Alignment_Layer0_loss/Squeeze)]]
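One thing that may be worth checking (a guess, not a confirmed fix): the failing Unique op sits inside the per-layer alignment loss, which needs a 1-D label vector, so the extra dimension may be on the labels rather than on X. A minimal sanity-check sketch, using the variable names from UCR_alignment.py:

import numpy as np

# Hypothetical shape check: the alignment loss groups samples by class, so the
# label arrays (train and validation) should be 1-D, e.g. (N,) rather than (N, 1).
print(X_train.shape, y_train.shape)   # expected: (N, T, 1) and (N,)
y_train = np.squeeze(np.asarray(y_train))
assert y_train.ndim == 1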

Please reconfirm the Python version in requirements.txt

Based on my testing, there is a good chance the Python version given in requirements.txt (3.7.9) is wrong; the correct requirement should be >=3.8.0.
First, numpy==1.22.0 does not match that Python version:

ERROR: Ignored the following versions that require a different python version: 1.22.0 Requires-Python >=3.8; 1.22.1 Requires-Python >=3.8; 1.22.2 Requires-Python >=3.8; 1.22.3 Requires-Python >=3.8; 1.22.4 Requires-Python >=3.8; 1.23.0 Requires-Python >=3.8; 1.23.0rc1 Requires-Python >=3.8; 1.23.0rc2 Requires-Python >=3.8; 1.23.0rc3 Requires-Python >=3.8; 1.23.1 Requires-Python >=3.8; 1.23.2 Requires-Python >=3.8; 1.23.3 Requires-Python >=3.8; 1.23.4 Requires-Python >=3.8; 1.23.5 Requires-Python >=3.8; 1.24.0 Requires-Python >=3.8; 1.24.0rc1 Requires-Python >=3.8; 1.24.0rc2 Requires-Python >=3.8; 1.24.1 Requires-Python >=3.8; 1.24.2 Requires-Python >=3.8

In addition, libcpab requires Python >= 3.8.0; otherwise you will hit an "unsupported pickle protocol: 5" error, because its .pkl files were generated under Python 3.8.

ERROR: Ignored the following versions that require a different python version: 1.10.0 Requires-Python <3.12,>=3.8; 1.10.0rc1 Requires-Python <3.12,>=3.8; 1.10.0rc2 Requires-Python <3.12,>=3.8; 1.10.1 Requires-Python <3.12,>=3.8; 1.8.0 Requires-Python >=3.8,<3.11; 1.8.0rc1 Requires-Python >=3.8,<3.11; 1.8.0rc2 Requires-Python >=3.8,<3.11; 1.8.0rc3 Requires-Python >=3.8,<3.11; 1.8.0rc4 Requires-Python >=3.8,<3.11; 1.8.1 Requires-Python >=3.8,<3.11; 1.9.0 Requires-Python >=3.8,<3.12; 1.9.0rc1 Requires-Python >=3.8,<3.12; 1.9.0rc2 Requires-Python >=3.8,<3.12; 1.9.0rc3 Requires-Python >=3.8,<3.12; 1.9.1 Requires-Python >=3.8,<3.12; 1.9.2 Requires-Python >=3.8; 1.9.3 Requires-Python >=3.8

Traceback (most recent call last):
  File "UCR_alignment.py", line 121, in <module>
    run_UCR_alignment(args, dataset_name=args.dataset)
  File "UCR_alignment.py", line 110, in run_UCR_alignment
    model = train(train_loader, validation_loader, DTANargs, Experiment, print_model=True)
  File "D:\Github\dtan\models\train_model.py", line 29, in train
    zero_boundary=DTANargs.zero_boundary, device='gpu').to(device)
  File "D:\Github\dtan\DTAN\DTAN_layer.py", line 53, in __init__
    self.T = Cpab(tess, backend='pytorch', device=device, zero_boundary=zero_boundary, volume_perservation=False)
  File "D:\Github\dtan\DTAN\libcpab\cpab.py", line 103, in __init__
    self._dir, override)
  File "D:\Github\dtan\DTAN\libcpab\core\tesselation.py", line 176, in __init__
    zero_boundary, volume_perservation, direc, override)
  File "D:\Github\dtan\DTAN\libcpab\core\tesselation.py", line 98, in __init__
    self.dict = load_obj(self._basis_file)
  File "D:\Github\dtan\DTAN\libcpab\core\utility.py", line 55, in load_obj
    return pkl.load(f)
ValueError: unsupported pickle protocol: 5
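If upgrading Python is not possible, one workaround (a sketch, assuming you have access to some Python >= 3.8 interpreter) is to re-save the offending basis .pkl with an older pickle protocol so that Python 3.7 can read it; the path below is a placeholder:

# Run this once under Python >= 3.8; afterwards the file loads under Python 3.7.
import pickle

basis_file = "path/to/cpab_basis.pkl"  # placeholder for the file libcpab fails to load
with open(basis_file, "rb") as f:
    obj = pickle.load(f)
with open(basis_file, "wb") as f:
    pickle.dump(obj, f, protocol=4)  # protocol 4 is readable by Python 3.4+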

Here is the package-version list that worked for me:

  • python 3.8.0
  • matplotlib 3.3.2
  • numpy 1.23.5
  • pandas 1.5.3
  • Pillow 9.4.0
  • scikit-learn 1.2.2
  • scipy 1.5.2
  • seaborn 0.11.1
  • torch 1.5.0
  • torchvision 0.6.0
  • tqdm 4.56.0
  • tslearn 0.5.3.2

[Question]: Get alignment transformation

Hi @ronshapiraweber
We are using DTAN in one of our academic projects to align some time-series data, and we are wondering about the right way to get the alignment transformation. By "alignment transformation" I mean the transformation that can be applied (after training) to a given misaligned signal to produce the aligned signal as output.

So far I thought this could be the theta returned by the stn function, but based on some experiments, it doesn't seem to be the right transformation.
Do you have any suggestions or ideas on the most efficient and accurate way to get such a transformation after the training phase completes?

Thanks,
Alaa
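A minimal sketch of one interpretation (assuming the trained PyTorch model is callable and its forward pass applies the learned warp to its input; this is not confirmed by the repo's documentation):

import torch

# Sketch, not the repo's documented API: apply a trained DTAN to new,
# misaligned signals. The forward signature and shapes are assumptions,
# and X_new is hypothetical input data.
model.eval()
with torch.no_grad():
    x = torch.as_tensor(X_new, dtype=torch.float32)
    x_aligned = model(x)  # forward pass returns the warped (aligned) signals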

IndexError: theta transpose

I am working off of the PyTorch branch, and I am trying to use different data with DTAN. For some data, the PyTorch branch works perfectly well.

For other data, I get the following error: IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Here is the traceback:

Traceback (most recent call last):
  File "/home/dtan/examples/UCR_alignment.py", line 115, in <module>
    run_UCR_alignment(args, dataset_name='pcgg_first_fourth')#, dataset_name="pfi_transfer_ECG")
  File "/home/dtan/examples/UCR_alignment.py", line 102, in run_UCR_alignment
    model = train(args, train_loader, validation_loader, DTANargs, Experiment, print_model=True)
  File "/home/dtan/models/train_model.py", line 40, in train
    train_loss = train_epoch(train_loader, device, optimizer, model, channels, DTANargs)
  File "/home/dtan/models/train_model.py", line 72, in train_epoch
    loss = alignment_loss(output, target, thetas, channels, DTANargs)
  File "/home/dtan/DTAN/alignment_loss.py", line 40, in alignment_loss
    prior_loss += 0.1*smoothness_norm(DTANargs.T, theta, DTANargs.lambda_smooth, DTANargs.lambda_var, print_info=False)
  File "/home/dtan/DTAN/smoothness_prior.py", line 103, in smoothness_norm
    theta_T = torch.transpose(theta, 0, 1)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Any idea what might be causing this? Is there anything I have to do to the data to make it work?

Thank you!
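For reference, the traceback suggests theta reaches smoothness_norm as a 1-D tensor (for example, when a class contributes only a single sample), while the code transposes dimensions 0 and 1. A guard one could try before the transpose, as a sketch rather than a confirmed fix:

import torch

# Sketch: make sure theta has a batch dimension so the transpose in
# DTAN/smoothness_prior.py has two dimensions to swap.
if theta.dim() == 1:
    theta = theta.unsqueeze(0)  # (d,) -> (1, d)
theta_T = torch.transpose(theta, 0, 1)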

model.save() not working

I know you all have said that you are aware of this problem, and plan to fix it in an upcoming release.

However, I am under a time crunch and cannot wait for the new release to save my trained model. Is there a workaround in TF 1.11 that you are currently aware of?
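A common workaround in TF 1.x Keras (a generic sketch, not specific to this repo) is to save only the weights and rebuild the architecture in code before loading them back; save_weights and load_weights are available on Keras models in TF 1.11:

# Sketch: bypass model.save() by persisting the weights only.
model.save_weights("dtan_weights.h5")  # after training

# Later: reconstruct the same architecture the same way it was originally built
# (rebuilt_model is a placeholder name), then restore the trained weights.
rebuilt_model.load_weights("dtan_weights.h5")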

Multi-channel expected input format

Hi Ron,
Looking at your code, I see you have some checks for handling multichannel cases. However, I can't find any example or reference for the format you expect for multichannel data.
Is multichannel data supported in your code, or am I missing something?
If it is supported, what format is it expected to be in?

Thanks
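In case it helps, a common convention for multichannel time series in PyTorch code is an array of shape (N, channels, length), with single-channel data as (N, 1, length); whether this repo expects exactly that layout is an assumption, not something its documentation states:

import numpy as np

# Sketch under the assumed (N, channels, length) layout. X_raw is hypothetical
# multichannel data stored as (N, length, channels).
X_raw = np.random.randn(8, 100, 3)
X = np.transpose(X_raw, (0, 2, 1)).astype(np.float32)  # -> (N, C, T) = (8, 3, 100)
print(X.shape)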

Questions about DTAN.

Hello,

I am YJHong.

I came across your nice work while trying to solve a time-series matching problem.

My problem is just sequence matching; I don't need to classify the sequences after aligning.

I have a few questions about DTAN.

  • What if the sequences have variable lengths? The sequences in the UCR datasets seem to have an almost fixed length, but in my case the sequence lengths vary a lot. The first option I thought of is zero-padding the shorter sequences to the maximum length in a batch, similar to speech tasks, but that may require some constraint telling the model which parts of a sequence in a batch are zero-padded (see the padding sketch after this list). Can you give me some guidance on this?

  • In DTAN, the input signals U_i are aligned through f_loc and CPAB. But what I want to see is a kind of warping path from the reference signal to the target signal. Is that possible in DTAN?
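A minimal zero-padding sketch for the variable-length case raised in the first bullet (a generic approach, not something DTAN is documented to support; whether the alignment loss should ignore the padded region is exactly the open question):

import numpy as np

def pad_to_max_length(sequences, value=0.0):
    """Zero-pad a list of 1-D sequences to the longest length in the batch.

    Returns the padded array (N, T_max) and a boolean mask (N, T_max) marking
    the real (non-padded) samples, which a masked loss could use.
    """
    max_len = max(len(s) for s in sequences)
    padded = np.full((len(sequences), max_len), value, dtype=np.float32)
    mask = np.zeros((len(sequences), max_len), dtype=bool)
    for i, s in enumerate(sequences):
        padded[i, :len(s)] = s
        mask[i, :len(s)] = True
    return padded, mask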

Thank you in advance.

Regards.
