
[IJCAI-24] Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting

Our work has been accepted to the IJCAI 2024 main track. The citation information will be updated once the official IJCAI-24 proceedings are online.

Framework

Preprint Link (All six datasets [PEMS03, 04, 07, 08, PEMS-BAY, and METR-LA] are included.)

arXiv link

Google Scholar

Because STD-MAE's title was changed, you can simply search for "STD-MAE" on Google Scholar to find our article.

Citation

@article{gao2023spatio,
  title={Spatio-Temporal-Decoupled Masked Pre-training for Traffic Forecasting},
  author={Gao, Haotian and Jiang, Renhe and Dong, Zheng and Deng, Jinliang and Song, Xuan},
  journal={arXiv preprint arXiv:2312.00516},
  year={2023}
}

Performance on Spatiotemporal Forecasting Benchmarks

  • Please note that you can achieve much better performance on the PEMS07 dataset with a pre-training length of 2016, but this is a time-consuming operation.

Main results.

Results on METR-LA and PEMS-BAY.

💿 Dependencies

OS

Linux systems (e.g. Ubuntu and CentOS).

Python

The code is built on Python 3.9, PyTorch 1.13.0, and EasyTorch. You can install PyTorch by following the instructions on the PyTorch website.

Miniconda or Anaconda is recommended for creating a virtual Python environment.
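For example, a setup sketch along these lines could be used (the environment name and the plain-CPU `pip` install below are illustrative, not prescribed by the repo; pick the PyTorch build matching your CUDA version from the official instructions):

```shell
# Illustrative environment setup; adjust names and versions to your machine.
conda create -n stdmae python=3.9 -y
conda activate stdmae
# Install PyTorch 1.13.0 following the official instructions for your CUDA setup, e.g.:
pip install torch==1.13.0
# Then install the remaining dependencies:
pip install -r requirements.txt
```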

We implement our code based on BasicTS.

Other Dependencies

pip install -r requirements.txt

Getting started

Download Data

You can download the data from BasicTS and unzip it.

Preparing Data

  • Pre-process Data

You can pre-process all datasets by running:

cd /path/to/your/project
bash scripts/data_preparation/all.sh

Then the dataset directory will look like this:

datasets
   ├─PEMS03
   ├─PEMS04
   ├─PEMS07
   ├─PEMS08
   ├─raw_data
   |    ├─PEMS03
   |    ├─PEMS04
   |    ├─PEMS07
   |    ├─PEMS08
   ├─README.md

Pre-training on S-MAE and T-MAE

cd /path/to/your/project

Then run the following commands inside a Linux screen session:

screen -d -m python stdmae/run.py --cfg='stdmae/TMAE_PEMS03.py' --gpus='0' 

screen -d -m python stdmae/run.py --cfg='stdmae/TMAE_PEMS04.py' --gpus='0'

screen -d -m python stdmae/run.py --cfg='stdmae/TMAE_PEMS07.py' --gpus='0' 

screen -d -m python stdmae/run.py --cfg='stdmae/TMAE_PEMS08.py' --gpus='0'

screen -d -m python stdmae/run.py --cfg='stdmae/SMAE_PEMS03.py' --gpus='0' 

screen -d -m python stdmae/run.py --cfg='stdmae/SMAE_PEMS04.py' --gpus='0'

screen -d -m python stdmae/run.py --cfg='stdmae/SMAE_PEMS07.py' --gpus='0' 

screen -d -m python stdmae/run.py --cfg='stdmae/SMAE_PEMS08.py' --gpus='0'

Downstream Predictor

After pre-training, copy your best pre-trained checkpoints to mask_save/. For example:

cp checkpoints/TMAE_200/064b0e96c042028c0ec44856f9511e4c/TMAE_best_val_MAE.pt mask_save/TMAE_PEMS04_864.pt
cp checkpoints/SMAE_200/50cd1e77146b15f9071b638c04568779/SMAE_best_val_MAE.pt mask_save/SMAE_PEMS04_864.pt

Then run the predictor as:

screen -d -m python stdmae/run.py --cfg='stdmae/STDMAE_PEMS04.py' --gpus='0' 

screen -d -m python stdmae/run.py --cfg='stdmae/STDMAE_PEMS03.py' --gpus='0' 

screen -d -m python stdmae/run.py --cfg='stdmae/STDMAE_PEMS08.py' --gpus='0'

screen -d -m python stdmae/run.py --cfg='stdmae/STDMAE_PEMS07.py' --gpus='0' 
  • To find the best result in the logs, search for best_ in the log files.
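As a self-contained illustration (the dummy log file and metric values below are made up; in practice, point grep at the real log files under your checkpoints directory):

```shell
# Write a dummy log with one "best_" line, then grep for it with line numbers.
printf 'epoch 100 val_MAE: 19.20\nbest_val_MAE: 18.65\n' > demo_train.log
grep -n 'best_' demo_train.log   # -> 2:best_val_MAE: 18.65
```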

std-mae's People

Contributors

jimmy-7664, moghadas76


std-mae's Issues

Question about Downstream

Hi, there!

First, thank you for open-sourcing your code. I'm very interested in the Downstream Spatiotemporal Predictor in your paper, but I haven't found any code for it. Could you point me to where it is?

Test metrics of the pre-trained weights, and a learning-rate question

Hello,
I ran BasicTS's runner.test_process with the config stdmae/TMAE_PEMS03.py and the corresponding weights mask_save\SMAE_PEMS03_864.pt and got [test_MAE: 99.9056, test_RMSE: 134.5602, test_MAPE: 1.3745]; with the config stdmae/TMAE_PEMS04.py and the weights mask_save\SMAE_PEMS04_864.pt I got [test_MAE: 113.2828, test_RMSE: 145.2737, test_MAPE: 1.7690]. The MAE in these results is far above normal; training TMAE_PEMS03 locally for 40 epochs gives [test_MAE: 13.3202, test_RMSE: 22.9870, test_MAPE: 0.1361]. Could the weights you provide in mask_save be wrong?
Also, with BatchSize=64 or 128, the MAE during TMAE training is unsatisfactory; even after adjusting the learning rate, it remains far above what your BatchSize=4 setting achieves, and BatchSize=8 also performs poorly. However, training with BatchSize=4 is too slow (roughly 116 seconds per epoch at BatchSize=128 versus about five minutes per epoch at BatchSize=4). How should the learning rate be set to accommodate a larger batch size?
Thanks

Table 2 Results

Hello, I would like to confirm whether the final results in Table 2 are the average over the 12 predicted steps?

Question about Pretraining

Thank you for such good work; I like it!
However, when I reproduced the code, I found that pre-training is very slow (5 minutes per epoch), which means pre-training takes about a day. Is that right?
I reproduced the pre-training process on an RTX 3090 GPU.

May I ask how I can debug this project?

I have run this project in the terminal successfully.
But when I tried to run and debug run.py, it failed.
Could you please tell me how I can debug this project?

GOOD job

Your figures are beautiful, especially Fig. 3. What software did you use to draw them? Could you share a template or the code?

launch_training

May I ask whether the definition of launch_training is missing from the code? I look forward to your reply.

Inference question

How do I use the trained model for inference, and how do I obtain the prediction results each time?
