Leapfrog Diffusion Model for Stochastic Trajectory Prediction (LED)

Official PyTorch code for CVPR'23 paper "Leapfrog Diffusion Model for Stochastic Trajectory Prediction".

1. Overview

Figure: system overview.

Abstract: To model the indeterminacy of human behaviors, stochastic trajectory prediction requires a sophisticated multi-modal distribution of future trajectories. Emerging diffusion models have revealed their tremendous representation capacities in numerous generation tasks, showing potential for stochastic trajectory prediction. However, expensive time consumption prevents diffusion models from real-time prediction, since a large number of denoising steps are required to assure sufficient representation ability. To resolve the dilemma, we present LEapfrog Diffusion model (LED), a novel diffusion-based trajectory prediction model, which provides real-time, precise, and diverse predictions. The core of the proposed LED is to leverage a trainable leapfrog initializer to directly learn an expressive multi-modal distribution of future trajectories, which skips a large number of denoising steps, significantly accelerating inference speed. Moreover, the leapfrog initializer is trained to appropriately allocate correlated samples to provide a diversity of predicted future trajectories, significantly improving prediction performances. Extensive experiments on four real-world datasets, including NBA/NFL/SDD/ETH-UCY, show that LED consistently improves performance and achieves 23.7%/21.9% ADE/FDE improvement on NFL. The proposed LED also speeds up the inference 19.3/30.8/24.3/25.1 times compared to the standard diffusion model on NBA/NFL/SDD/ETH-UCY, satisfying real-time inference needs.

Figure: mean and variance estimation.
Here we present an example (above) illustrating the mean and variance estimation of the leapfrog initializer across four scenes from the NBA dataset. The learned variance tracks the complexity of the scene around the current agent, supporting the soundness of our variance estimation.
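To make this concrete, here is a minimal, self-contained sketch of the leapfrog initialization step. All shapes and the toy constant-velocity initializer below are illustrative assumptions; the actual learned model is defined in models/model_led_initializer.py.

```python
import numpy as np

# Illustrative sketch only: the real initializer is a learned network
# (models/model_led_initializer.py); shapes below are assumptions.
K, T_PAST, T_FUT, D = 20, 10, 20, 2  # samples, past steps, future steps, (x, y)

def toy_initializer(past_traj):
    """Stand-in for the learned initializer: constant-velocity mean,
    a fixed variance, and K normalized sample offsets."""
    vel = past_traj[-1] - past_traj[-2]                          # (D,)
    mu = past_traj[-1] + vel * np.arange(1, T_FUT + 1)[:, None]  # (T_FUT, D)
    sigma = 0.5                                                  # reflects scene complexity
    s_norm = np.random.randn(K, T_FUT, D)                        # normalized samples
    return mu, sigma, s_norm

def leapfrog_init(past_traj):
    """Combine mean, variance, and samples into K initial trajectories.
    These replace most denoising steps; only a few refinement steps of
    the pretrained diffusion model are run afterwards."""
    mu, sigma, s_norm = toy_initializer(past_traj)
    return mu + sigma * s_norm                                   # (K, T_FUT, D)

past = np.cumsum(np.random.randn(T_PAST, D), axis=0)  # a random past trajectory
print(leapfrog_init(past).shape)  # (20, 20, 2)
```

A larger estimated sigma spreads the K initial guesses more widely, which is how the learned variance can reflect scene complexity.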

2. Code Guidance

Overall project structure:

----LED\   
    |----README.md
    |----requirements.txt # packages to install                    
    |----main_led_nba.py  # [CORE] main file
    |----trainer\ # [CORE] main training files, we define the denoising process HERE!
    |    |----train_led_trajectory_augment_input.py 
    |----models\  # [CORE] define models under this file
    |    |----model_led_initializer.py                    
    |    |----model_diffusion.py    
    |    |----layers.py
    |----utils\ 
    |    |----utils.py 
    |    |----config.py
    |----data\ # preprocessed data (~200MB) and dataloader
    |    |----files\
    |    |    |----nba_test.npy
    |    |    |----nba_train.npy
    |    |----dataloader_nba.py
    |----cfg\ # config files
    |    |----nba\
    |    |    |----led_augment.yml
    |----results\ # store the results and checkpoints (~100MB)
    |----visualization\ # some visualization codes

Please download the data and results from Google Drive.

TODO list:

  • add training/evaluation for the diffusion model (within two weeks).
  • more detailed descriptions in the trainers (within one month).
  • move the hard-coded model parameters into the YAML configs.
  • other fast sampling methods (DDIM and PD).

2.1. Environment

We train and evaluate our model on Ubuntu 18.04 with an RTX 3090 (24 GB).

Create a new Python environment (led) using conda:

conda create -n led python=3.7
conda activate led

Install required packages using Command 1 or 2:

# Command 1 (recommended):
pip install -r requirements.txt

# Command 2:
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install easydict
pip install glob2

2.2. Training

You can use the following command to start training the initializer.

python main_led_nba.py --cfg <-config_file_name_here-> --gpu <-gpu_index_here-> --train 1 --info <-experiment_information_here->

# e.g.
python main_led_nba.py --cfg led_augment --gpu 5 --train 1 --info try1

The results are stored under the ./results folder.

2.3. Evaluation

We provide pretrained models under the ./checkpoints folder.

Reproduce. Run python main_led_nba.py --cfg led_augment --gpu 5 --train 0 --info reproduce and you will get the following results:

[Core Denoising Model] Trainable/Total: 6568720/6568720
[Initialization Model] Trainable/Total: 4634721/4634721 
./checkpoints/led_new.p  
--ADE(1s): 0.1766       --FDE(1s): 0.2694
--ADE(2s): 0.3693       --FDE(2s): 0.5642
--ADE(3s): 0.5817       --FDE(3s): 0.8366
--ADE(4s): 0.8095       --FDE(4s): 1.0960 
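For reference, the ADE/FDE numbers above are minimum-over-K displacement errors. Below is a minimal NumPy version of these metrics, a sketch under assumed array shapes rather than the repo's own implementation:

```python
import numpy as np

# Min-over-K ADE/FDE, the metrics reported above.
# Array shapes are assumptions for illustration.
def min_ade_fde(pred, gt):
    """pred: (K, T, 2) candidate futures; gt: (T, 2) ground truth.
    ADE averages the displacement over all timesteps, FDE takes the
    final timestep; both keep the best of the K samples."""
    dist = np.linalg.norm(pred - gt, axis=-1)  # (K, T) per-step errors
    ade = dist.mean(axis=1).min()              # best average error
    fde = dist[:, -1].min()                    # best final-step error
    return ade, fde

# Every candidate is offset from the truth by (1, 1) -> error sqrt(2) everywhere.
pred = np.zeros((20, 10, 2))
gt = np.ones((10, 2))
ade, fde = min_ade_fde(pred, gt)
print(f"{ade:.4f} {fde:.4f}")  # 1.4142 1.4142
```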

3. Citation

If you find this code useful for your research, please cite our paper:

@inproceedings{mao2023leapfrog,
  title={Leapfrog Diffusion Model for Stochastic Trajectory Prediction},
  author={Mao, Weibo and Xu, Chenxin and Zhu, Qi and Chen, Siheng and Wang, Yanfeng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5517--5526},
  year={2023}
}

4. Acknowledgement

Some code is borrowed from MID, NPSN and GroupNet. We thank the authors for releasing their code.

led's Issues

questions about stage 1 training

Hi,

Thank you for sharing your code. As mentioned in your paper, there are two stages of training: the first for the denoising diffusion model, and the second for the leapfrog initializer. It seems the repo provides only the stage-2 training code, which directly loads the pretrained checkpoint of the denoising diffusion model. Could you also provide the code for stage-1 training? Do you use the leapfrog initializer in the first stage? If so, what initial values of the estimated mean, variance, and sample predictions did you use? Thanks!

Missing test_interval in led_augment.yml

  1. It appears that the test_interval parameter is missing from led_augment.yml. This might be an oversight worth addressing.
  2. The test_batch_size in the configuration appears to be quite large. Could you shed some light on the rationale behind this setting?

Out-of-memory issue while running the _test_single_epoch() function

Hi there, I'm currently trying to reproduce the training and evaluation process but have run into an out-of-memory (OOM) error in the _test_single_epoch() function.

I hit this on a 24 GB RTX 4090, so it does not appear to be a hardware limitation.

Do you have any suggestions on how to modify the function to release memory during evaluation?

Steps taken (screenshots omitted):

  • Training from scratch triggers the error.
  • After modifying the code and rerunning, the error persists, confirming the issue lies in the _test_single_epoch() function.
  • Solely evaluating with the pre-trained weights also triggers it.

cannot reproduce comparable results in the paper

Hi, I ran the code on a single NVIDIA GeForce RTX 3090 with the config file given in the paper. My reproduced results (screenshot omitted) differ significantly from those provided in the README.md file. Can you guide me through what the issue could be? Can you also provide more information on how to train a model with the same performance as the pre-trained model you provide in /checkpoints? Any help will be appreciated.

Code for ETH-UCY dataset

Thank you for your wonderful work!

Can you share the code for the ETH-UCY dataset dataloader?

Thank you.

NFL dataset

Thank you for your work and for sharing your code! I was wondering if you could provide the train/test split that you use for the NFL dataset.

Thank you in advance!

Code for SDD dataset

Hello,

Would you kindly share your preprocessing and other required codes for running LED on Stanford Drones dataset?

Thanks,
Sourav

There is no 'test_interval' attribute in 'Config' Class.

Hello author!
While running your model, I encountered the error AttributeError: 'Config' object has no attribute 'test_interval'. I checked the Config class and found that the variable test_interval is never set. Please resolve this.

ETH Deterministic Results

Hi there,

Do you have the deterministic results (when k=1) for the model applied to the ETH data? I tried to produce them myself, but I could not find the config file for the ETH data. Could you please provide them?

Best

raise ValueError in the save_data()

Hi, I was wondering why there is a raise ValueError at the very end of save_data() in train_led_trajectory_augment_input.py. Is it because this method is obsolete, so we should not use it? It seems that this method prepares all the necessary *.pt files for the visualization module. Could you explain this line of code, or is it a typo? Thanks.

When to share code?

Hello Author!
I am very interested in your research. When will you make the code public? Can you provide code or hints on how the visualization trajectories are drawn? Thank you very much for your research; it helps me a lot.

Typos in the paper

Typos spotted in the paper:

4.4. Training Objective
regulariser -> regularizer

5.2. Implementation Details
GTX-3090 -> RTX-3090 (the 20-40 series graphics cards carry the RTX prefix)
