inhwanbae / eigentrajectory

Official Code for "EigenTrajectory: Low-Rank Descriptors for Multi-Modal Trajectory Forecasting (ICCV 2023)"

Home Page: https://ihbae.com/publication/eigentrajectory/

License: MIT License

Python 98.88% Shell 1.12%
deep-learning eigen-vector-decomposition singular-value-decomposition autonomous-vehicles human-trajectory-prediction iccv2023 motion-forecasting multi-agent

eigentrajectory's Introduction

EigenTrajectory: Low-Rank Descriptors for Multi-Modal Trajectory Forecasting

Inhwan Bae · Jean Oh · Hae-Gon Jeon
ICCV 2023

Project Page · ICCV Paper · Source Code · Related Works



A common pipeline of trajectory prediction models and the proposed EigenTrajectory.


This repository contains the code for the EigenTrajectory(𝔼𝕋) space applied to 10 traditional Euclidean-based trajectory predictors.
EigenTrajectory-LB-EBM achieves an ADE/FDE of 0.21/0.34 while requiring only 1 hour for training!


🌌 EigenTrajectory(𝔼𝕋) Space 🌌

  • A novel trajectory descriptor based on Singular Value Decomposition (SVD), which provides an alternative to traditional methods.
  • It employs a low-rank approximation to reduce complexity and creates a compact space in which to represent pedestrian movements.
  • A new anchor-based refinement method effectively encompasses all potential futures.
  • It can significantly improve existing standard trajectory predictors simply by replacing the Euclidean space (see the sketch below this list).
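
To make the descriptor idea concrete, below is a minimal NumPy sketch of a low-rank trajectory space built with a truncated SVD. It is an illustration only, using random placeholder data; the names (k, basis, descriptor) are ours, not the repository's actual identifiers.

import numpy as np

# N trajectories, each with T timesteps in 2D (random placeholder data).
N, T, k = 1000, 12, 6                       # k: rank of the descriptor space
trajs = np.random.randn(N, T, 2)

# Stack flattened trajectories into a (2T x N) matrix and take a truncated SVD;
# the top-k left singular vectors span the low-rank trajectory space.
traj_matrix = trajs.reshape(N, T * 2).T
U, S, Vt = np.linalg.svd(traj_matrix, full_matrices=False)
basis = U[:, :k]                            # (2T x k) orthonormal basis

# Each trajectory is now summarized by k coefficients instead of 2T numbers.
descriptor = basis.T @ traj_matrix[:, 0]    # project one trajectory -> (k,)
recon = basis @ descriptor                  # low-rank reconstruction -> (2T,)
print(np.linalg.norm(traj_matrix[:, 0] - recon))  # reconstruction error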

Model Training

Setup

Environment
All models were trained and tested on Ubuntu 18.04 using Python 3.7, PyTorch 1.12.1, and CUDA 11.1.

Dataset
Preprocessed ETH and UCY datasets are included in this repository under ./datasets/. The train/validation/test splits are the same as those found in Social-GAN.
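
For reference, here is a minimal loader sketch, assuming the files follow the Social-GAN plain-text format with one observation per line as <frame_id> <ped_id> <x> <y>; the file path below is hypothetical.

import numpy as np

def load_annotations(path):
    # Each line: frame_id ped_id x y, whitespace-separated.
    rows = []
    with open(path) as f:
        for line in f:
            frame_id, ped_id, x, y = line.split()
            rows.append([float(frame_id), float(ped_id), float(x), float(y)])
    return np.asarray(rows)

data = load_annotations('./datasets/eth/train/some_file_train.txt')  # hypothetical path
print(data.shape)  # (num_rows, 4)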

You can also download the dataset by running the following script.

./scripts/download_datasets.sh

Baseline models
This repository supports 10 baseline models: AgentFormer, DMRGCN, GPGraph-SGCN, GPGraph-STGCNN, Graph-TERN, Implicit, LBEBM, PECNet, SGCN, and Social-STGCNN. We have included each model's source code, taken from its official GitHub repository, in the ./baselines/ folder.

If you want to add your own baseline model, simply paste the model code into the ./baselines/ folder and add a few lines of constructor initialization and bridge code, as sketched below.
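
As a rough illustration of what such bridge code might look like (the class name, constructor signature, and projection logic here are assumptions for this sketch, not the repository's actual interface):

import torch.nn as nn

class MyBaselineBridge(nn.Module):
    """Wraps a pasted baseline so it predicts EigenTrajectory coefficients
    instead of raw Euclidean coordinates."""

    def __init__(self, baseline_model, et_basis):
        super().__init__()
        self.model = baseline_model   # your baseline network from ./baselines/
        self.basis = et_basis         # (2T x k) trajectory basis, precomputed

    def forward(self, obs_traj):      # obs_traj: (batch, T, 2)
        coeffs = obs_traj.flatten(1) @ self.basis   # project into ET space
        pred_coeffs = self.model(coeffs)            # baseline works on k coeffs
        return pred_coeffs @ self.basis.T           # lift back to coordinates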

Train EigenTrajectory

To train our EigenTrajectory models on the ETH and UCY datasets at once, we provide a bash script train.sh for simplified execution.

./scripts/train.sh

We provide additional arguments for experiments:

./scripts/train.sh -t <experiment_tag> -b <baseline_model> -c <config_file_path> -p <config_file_prefix> -d <space_separated_dataset_string> -i <space_separated_gpu_id_string>

# Supported baselines: agentformer, dmrgcn, gpgraphsgcn, gpgraphstgcnn, graphtern, implicit, lbebm, pecnet, sgcn, stgcnn
# Supported datasets: eth, hotel, univ, zara1, zara2

# Examples
./scripts/train.sh -b sgcn -d "hotel" -i "1"
./scripts/train.sh -b agentformer -t EigenTrajectory -d "zara2" -i "2"
./scripts/train.sh -b pecnet -c ./config/ -p eigentrajectory -d "eth hotel univ zara1 zara2" -i "0 0 0 0 0"

If you want to train the model with custom hyper-parameters, use trainval.py instead of the script file.

python trainval.py --cfg <config_file_path> --tag <experiment_tag> --gpu_id <gpu_id> 
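
For example (the config file name below follows the default eigentrajectory prefix convention and is hypothetical):

python trainval.py --cfg ./config/eigentrajectory-sgcn-hotel.json --tag my-sgcn-run --gpu_id 0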

Model Evaluation

Pretrained Models

We provide pretrained models in the release section. You can download all 10 EigenTrajectory models at once by running the following script.

./scripts/download_pretrained_models.sh

Evaluate EigenTrajectory

To evaluate our EigenTrajectory models at once, we provide a bash script test.sh for simplified execution.

./scripts/test.sh -t <experiment_tag> -b <baseline_model> -c <config_file_path> -p <config_file_prefix> -d <space_separated_dataset_string> -i <space_separated_gpu_id_string>

# Examples
./scripts/test.sh -b sgcn -d "hotel" -i "1"
./scripts/test.sh -b agentformer -t EigenTrajectory -d "zara2" -i "2"
./scripts/test.sh -b pecnet -c ./config/ -p eigentrajectory -d "eth hotel univ zara1 zara2" -i "0 0 0 0 0"

If you want to evaluate the model individually, you can use trainval.py with custom hyper-parameters.

python trainval.py --test --cfg <config_file_path> --tag <experiment_tag> --gpu_id <gpu_id> 

📖 Citation

If you find this code useful for your research, please cite our trajectory prediction papers :)

💬 LMTrajectory (CVPR'24) 🗨️ | 1️⃣ SingularTrajectory (CVPR'24) 1️⃣ | 🌌 EigenTrajectory (ICCV'23) 🌌 | 🚩 Graph‑TERN (AAAI'23) 🚩 | 🧑‍🤝‍🧑 GP‑Graph (ECCV'22) 🧑‍🤝‍🧑 | 🎲 NPSN (CVPR'22) 🎲 | 🧶 DMRGCN (AAAI'21) 🧶

@inproceedings{bae2023eigentrajectory,
  title={EigenTrajectory: Low-Rank Descriptors for Multi-Modal Trajectory Forecasting},
  author={Bae, Inhwan and Oh, Jean and Jeon, Hae-Gon},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023}
}

More Information
@inproceedings{bae2024lmtrajectory,
  title={Can Language Beat Numerical Regression? Language-Based Multimodal Trajectory Prediction},
  author={Bae, Inhwan and Lee, Junoh and Jeon, Hae-Gon},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}

@inproceedings{bae2024singulartrajectory,
  title={SingularTrajectory: Universal Trajectory Predictor Using Diffusion Model},
  author={Bae, Inhwan and Park, Young-Jae and Jeon, Hae-Gon},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}

@article{bae2023graphtern,
  title={A Set of Control Points Conditioned Pedestrian Trajectory Prediction},
  author={Bae, Inhwan and Jeon, Hae-Gon},
  journal={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2023}
}

@inproceedings{bae2022gpgraph,
  title={Learning Pedestrian Group Representations for Multi-modal Trajectory Prediction},
  author={Bae, Inhwan and Park, Jin-Hwi and Jeon, Hae-Gon},
  booktitle={Proceedings of the European Conference on Computer Vision},
  year={2022}
}

@inproceedings{bae2022npsn,
  title={Non-Probability Sampling Network for Stochastic Human Trajectory Prediction},
  author={Bae, Inhwan and Park, Jin-Hwi and Jeon, Hae-Gon},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2022}
}

@article{bae2021dmrgcn,
  title={Disentangled Multi-Relational Graph Convolutional Network for Pedestrian Trajectory Prediction},
  author={Bae, Inhwan and Jeon, Hae-Gon},
  journal={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2021}
}

Acknowledgement

Part of our code is borrowed from AgentFormer, DMRGCN, GP-Graph, Graph-TERN, Implicit, LB-EBM, PECNet, SGCN and Social-STGCNN. We thank the authors for releasing their code and models.



eigentrajectory's Issues

How to reconstruct the trajectory at inference time?

Thanks for your work.

The method has a set of anchors for the coefficients. During training, you choose the right anchor using the minimal difference between the reconstruction and the ground-truth trajectory. However, during inference we don't have the GT trajectory, so how do we choose the right anchor and perform the reconstruction?
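
(A hedged sketch of one plausible resolution, not an official answer: anchor-based multimodal predictors typically keep every anchor at inference; each refined anchor is reconstructed into one of the K hypotheses, so no GT-based selection is needed. All names below are illustrative.)

import numpy as np

def predict_multimodal(anchors, offsets, basis):
    # anchors: (K, k) coefficient anchors; offsets: (K, k) learned refinements;
    # basis: (2T, k) trajectory basis. No single anchor is chosen at inference:
    # every refined anchor is lifted back into one candidate future trajectory.
    refined = anchors + offsets
    return refined @ basis.T      # (K, 2T): K multimodal hypotheses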

How should I determine the corresponding "static_dist"?

Thank you for your work!

When I was downloading the pretrained model files, I found that each dataset has a corresponding "static_dist" coefficient, such as 0.419 for the ETH dataset and 0.338 for the ZARA1 dataset. Now I want to train on the SDD dataset. How should I determine the corresponding "static_dist", and how were these values chosen?

Looking forward to your reply!

SDD datasets

Could you please provide a download link for the SDD dataset you used in the paper?

Code availability?

Thank you for sharing the great prediction method. When is the code going to be available?

GCS dataset?

Thanks for your amazing work. I am wondering where I can download the GCS dataset mentioned in your paper.
