emadeldeen24 / adatime
[TKDD 2023] AdaTime: A Benchmarking Suite for Domain Adaptation on Time Series Data
License: MIT License
Hi,
First, thank you for this huge piece of work; it's a very useful one. I have a small question about the CDAN loss. I saw that you added a conditional entropy loss computed on the target features only, but this doesn't seem to be implemented (or at least not this way) in the original CDAN code. What is the purpose of this loss, and where does it come from?
Best regards
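For reference, here is a minimal sketch of what I understand the conditional-entropy term to compute (a hypothetical plain-Python version for illustration, not your actual implementation):

```python
import math

def conditional_entropy(probs):
    # Mean Shannon entropy of the classifier's softmax outputs: low when
    # predictions are confident, high when they are uncertain. Minimizing
    # this on target-domain batches pushes the classifier toward confident
    # predictions on the (unlabeled) target data.
    total = 0.0
    for row in probs:
        total += -sum(p * math.log(p + 1e-12) for p in row)
    return total / len(probs)

confident = [[0.99, 0.01], [0.98, 0.02]]
uncertain = [[0.5, 0.5], [0.6, 0.4]]
assert conditional_entropy(confident) < conditional_entropy(uncertain)
```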
Hello! I'm trying to run the project, but I have a little trouble.

# tree -L 2 .
.
├── algorithms
│ ├── algorithms.py
│ └── __pycache__
├── configs
│ ├── data_model_configs.py
│ ├── hparams.py
│ ├── __pycache__
│ └── sweep_params.py
├── data
│ ├── HAR -> /home/xxx/research/dataset/HAR
│ └── README.md
...
13 directories, 20 files
python main.py --experiment_description exp1 \
--run_description run_1 \
--da_method DANN \
--backbone CNN \
--num_runs 5 \
--is_sweep False
Traceback (most recent call last):
File "main.py", line 45, in <module>
trainer = cross_domain_trainer(args)
File "/home/xxx/research/code/AdaTime/trainer.py", line 59, in __init__
self.dataset_configs, self.hparams_class = self.get_configs()
File "/home/xxx/research/code/AdaTime/trainer.py", line 203, in get_configs
dataset_class = get_dataset_class(self.dataset)
File "/home/xxx/research/code/AdaTime/configs/data_model_configs.py", line 4, in get_dataset_class
raise NotImplementedError("Dataset not found: {}".format(dataset_name))
NotImplementedError: Dataset not found: HAR
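For reference, my understanding is that the dataset configs are resolved by class name, so any mismatch between the dataset string and the config class triggers this error. A hypothetical sketch of such a name-based lookup (the HAR class here is a stand-in, not the real config):

```python
class HAR:
    # Hypothetical per-dataset config class; the real ones live in
    # configs/data_model_configs.py.
    sequence_len = 128

def get_dataset_class(dataset_name):
    # Resolve the config class by name. Any mismatch between the dataset
    # string and the class name (case, typo, missing class) raises the
    # NotImplementedError shown in the traceback above.
    cls = globals().get(dataset_name)
    if isinstance(cls, type):
        return cls
    raise NotImplementedError("Dataset not found: {}".format(dataset_name))

assert get_dataset_class("HAR") is HAR
```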
First of all, thanks a lot for all your effort!
I just have a small concern regarding the usage of the preprocessed versions of the data that you provide. The preprocessed datasets always include train and test splits; however, the preprocessing scripts seem to produce validation sets as well. (In some cases the lines for the val set are commented out; in others they are not.)
To create a validation set (e.g. for early stopping), what would you suggest? Should I, for instance, further split the train split into train and val? In that case, is there anything I should take into account to prevent information leakage across the splits?
Best wishes,
PS: To make the issue more concrete: in the WISDM preprocessing script the dataset is first split into train and test, and the train set is further split into train and val. However, the val set is never saved.
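For concreteness, here is the kind of split I had in mind (a hypothetical helper, not from the repo): holding out whole subjects so that overlapping windows from one recording never land on both sides of the split, which is the usual leakage risk with sliding windows over time series.

```python
import random

def split_by_subject(windows_by_subject, val_fraction=0.2, seed=0):
    # Hold out entire SUBJECTS for validation rather than splitting at the
    # window level, so overlapping windows from the same recording cannot
    # leak between train and val. `windows_by_subject` maps a subject id
    # to its list of preprocessed windows.
    subjects = sorted(windows_by_subject)
    random.Random(seed).shuffle(subjects)
    n_val = max(1, int(len(subjects) * val_fraction))
    val_subjects = set(subjects[:n_val])
    train = [w for s in subjects if s not in val_subjects
             for w in windows_by_subject[s]]
    val = [w for s in val_subjects for w in windows_by_subject[s]]
    return train, val
```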
Hi emadeldeen24,
I'm trying to reproduce the results listed in your paper with the following setup:
python main.py --experiment_description domain-adapt-test --run_description domain-adapt-run --da_method Deep_Coral --dataset HAR --sweep_project_wandb domain-adapt-sweep --num_runs 1 --device cpu --is_sweep True --num_sweeps 1
Somehow the --num_sweeps parameter is ignored, and wandb runs an unbounded number of sweeps anyway (I stopped the run at 50).
Can you help?
Thanks a lot,
Nicole
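For reference, this is the behavior I observed versus what I expected, sketched with a simplified stand-in for wandb.agent's count argument (assuming num_sweeps is meant to be forwarded as count, e.g. wandb.agent(sweep_id, function=train, count=args.num_sweeps)):

```python
def launch_agent(trial_fn, count=None, hard_stop=50):
    # Simplified stand-in for wandb.agent's `count` semantics: run at most
    # `count` trials, or keep pulling new runs when count is None. The
    # hard_stop mirrors me killing the run at 50 sweeps.
    done = 0
    while count is None or done < count:
        trial_fn()
        done += 1
        if count is None and done >= hard_stop:
            break
    return done

assert launch_agent(lambda: None, count=1) == 1  # what --num_sweeps 1 should do
assert launch_agent(lambda: None) == 50          # what actually happened
```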