
Multi-Vehicle Trajectory Prediction at Intersections using State and Intention Information

Abstract

Traditional approaches to predicting the future trajectories of road agents rely on information about their past trajectories. This work instead relies only on knowledge of the current state and intended direction to make predictions for multiple vehicles at intersections. Furthermore, message passing of this information between the vehicles provides each of them with a more holistic view of the environment, allowing for a more informed prediction. This is done by training a neural network that takes the state and intent of multiple vehicles and predicts their future trajectories. Using the intention as an input allows our approach to be extended to additionally control the multiple vehicles to drive along desired paths. Experimental results demonstrate the robustness of our approach both in terms of trajectory prediction and vehicle control at intersections.


Results

  • Controlling vehicles without message passing (baseline) [video]
  • Controlling vehicles with message passing [video]

Setup

1) Package Installation

First, create a new conda environment and install the required packages by running the following commands:

conda create -n mvn python=3.7
conda activate mvn
conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
conda install pyg -c pyg
conda install matplotlib
conda install -c conda-forge cvxpy
conda install -c anaconda lxml
conda install -c anaconda pandas
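
Optionally, you can sanity-check that the core packages are importable (this check is our addition, not part of the original setup):

python -c "import torch, torch_geometric; print(torch.__version__, torch.cuda.is_available())"
# Should print something like: 1.11.0 True (False is expected on machines without a CUDA GPU)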

2) Software Installation

This research project is based on the SUMO-CARLA co-simulation, so you need to install these two traffic simulators on your machine. More information about this co-simulation setup is available here.

Alternatively, you could install SUMO by running:

pip install eclipse-sumo
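
To quickly confirm the pip installation, you can print the version (this assumes the sumo entry point is on your PATH, as the sumo-gui call below also does):

sumo --version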

After that, you can check whether the installation was successful. For CARLA, run:

cd ${Carla_folder}
# Navigate to the CARLA folder, e.g. cd /home/stud/zhud/Downloads/CARLA_0.9.10

bash CarlaUE4.sh    # Linux
# If you use Windows, execute CarlaUE4.exe

Then you should be able to see a city scenario in CARLA, as depicted in the following image.

[image: CARLA city scenario]
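
If the simulator runs slowly on your machine, CARLA also accepts a graphics-quality flag (available at least in the 0.9.x releases):

bash CarlaUE4.sh -quality-level=Low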

As for SUMO, after running

sumo-gui

in the terminal, you should be able to see an empty SUMO window, as depicted in the following image.

[image: empty SUMO GUI window]

Run the Inference code

1) Put the inference code and the map in place

The code in this project is partially developed on top of the official CARLA-SUMO co-simulation code. Thus, some official scripts need to be replaced by the files in this repository.

First, navigate to the directory where CARLA is installed, then copy the directories and scripts from this repository into the corresponding locations, as the following image shows.

[image: placement of the repository's scripts and directories inside the CARLA folder]

Red: scripts that we developed on top of the official CARLA code; the official versions need to be replaced by the scripts provided in this repository.

Green: newly created scripts or directories. They are provided in this repository and need to be placed in the corresponding locations shown in the figure above.

Blue: original scripts or directories in the CARLA folder that do not need to be replaced.

2) Activate our map in CARLA

In this project, we create an intersection scenario. The 3D map of this scenario can be activated by:

conda activate mvn

cd ${Carla_folder}/PythonAPI/util   # e.g. cd /home/stud/zhud/Downloads/CARLA_0.9.10/PythonAPI/util

python config.py -x ../../Co-Simulation/Sumo/sumo_files/map/map_15m.xodr

Now the intersection scenario should be activated, as depicted in the following image. You can use the mouse and the "W", "A", "S", "D" keys to change the camera view.

[image: intersection scenario in CARLA]
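
As a side note, config.py can also inspect the server, e.g. listing the maps currently available (flag available at least in CARLA 0.9.x):

python config.py --list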

3) Run the inference code

First, we need to set the environment variable SUMO_HOME, which should point to the SUMO installation directory. If you installed SUMO from pip, you can get the location by running:

pip show eclipse-sumo
# On our machine, this path is: /usr/stud/zhud/miniconda3/envs/mvn/lib/python3.7/site-packages/sumo

Then set SUMO_HOME by:

export SUMO_HOME=${SUMO location}
# e.g. export SUMO_HOME=/usr/stud/zhud/miniconda3/envs/mvn/lib/python3.7/site-packages/sumo
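
If you installed SUMO via pip, you can also derive SUMO_HOME directly from the pip metadata; a minimal sketch, assuming pip show prints a Location: line as it normally does:

export SUMO_HOME="$(pip show eclipse-sumo | sed -n 's/^Location: //p')/sumo"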

Note: if you run into problems loading the traci module (e.g. ImportError: No module named traci), check that SUMO_HOME is set correctly.
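
A quick way to verify the setting is to import traci through the standard $SUMO_HOME/tools path (this one-liner is our addition):

python -c "import os, sys; sys.path.append(os.path.join(os.environ['SUMO_HOME'], 'tools')); import traci; print('traci OK')"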

If the above steps all work properly, we can finally run the inference code to control the vehicles at this intersection! Just run the following commands:

cd ${Carla_folder}/Co-Simulation/Sumo   
# e.g. cd /home/stud/zhud/Downloads/CARLA_0.9.10/Co-Simulation/Sumo

python run_synchronization.py  ${SUMO_config_file}  --tls-manager carla  --sumo-gui  --step-length ${step_length} --pretrained-weights ${path_to_pretrained_weights}

# e.g. python run_synchronization.py  sumo_files/sumocfg/09-11-15-30-00400-0.09-val_10m_35m-7.sumocfg  --tls-manager carla  --sumo-gui  --step-length 0.1  --pretrained-weights  trained_params_archive/sumo_with_mpc_online_control/model_rot_gnn_mtl_wp_sumo_0911_e3_1910.pth

Now you should be able to see some vehicles appear and start moving, and the scenarios in SUMO and CARLA should be synchronized, as depicted in the following image (left: SUMO, right: CARLA).

[image: synchronized SUMO (left) and CARLA (right) views]

Training

In this repository, we also release the code for generating data from the SUMO simulator and training the model on your own.

1) Generate the dataset from SUMO

First, you can use generate_csv.py (provided in this repository) to generate the training set and validation set from SUMO by running:

cd ${folder of this repository} 
# e.g. cd /home/stud/zhud/Multi_Agent_Intersection

python generate_csv.py --num_seconds ${length of the generated sequence (unit: second)} --split ${train or val}
# e.g. python generate_csv.py --num_seconds 1000 --split train

The data (.csv format) will be generated in the csv folder.
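
A validation split can be generated the same way; the sequence length below is an arbitrary example:

python generate_csv.py --num_seconds 200 --split val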

Note: the SUMO map used in the above command is sumo/map/simple_separate_10m.net.xml. If you want to design a new map, you can use netedit by running:

netedit     # or execute netedit.exe on Windows

2) Preprocess the data

In this project, we use MPC to augment the training set, which aims to improve the robustness of the vehicles when they deviate from the lane center. The script preprocess.py is provided in this repository. Run the following command in the terminal:

cd ${folder of this repository} 
# e.g. cd /home/stud/zhud/Multi_Agent_Intersection

python preprocess.py --csv_folder ${csv folder} --pkl_folder ${pkl folder} --num_mpc_aug ${number of MPC data augmentation}

# e.g. python preprocess.py --csv_folder csv/train --pkl_folder csv/train_pre --num_mpc_aug 2

# Note: in case you don't want to have MPC data augmentation, set num_mpc_aug to 0,
# e.g. python preprocess.py --csv_folder csv/train --pkl_folder csv/train_pre --num_mpc_aug 0

Now the preprocessed data (*.pkl) is available in the pkl folder.
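
If you want to peek at a preprocessed file, a minimal sketch (assuming standard pickle serialization; the exact structure depends on preprocess.py):

python -c "import glob, pickle; p = sorted(glob.glob('csv/train_pre/*.pkl'))[0]; d = pickle.load(open(p, 'rb')); print(p, type(d))"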

3) Train the model

Once the training set and validation set are obtained, you can begin to train your model by running:

python train_gnn.py --train_folder ${path to the training set} --val_folder ${path to the validation set} --epoch ${number of total training epochs} --exp_id ${experiment ID} --batch_size ${batch size}

# e.g. python train_gnn.py --train_folder csv/train_pre --val_folder csv/train_pre --epoch 20 --exp_id sumo_0402 --batch_size 20
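
Note that the example above reuses the preprocessed training folder for validation; if you preprocessed a separate validation split, point --val_folder at it instead (csv/val_pre below is a hypothetical path):

# e.g. python train_gnn.py --train_folder csv/train_pre --val_folder csv/val_pre --epoch 20 --exp_id sumo_0402 --batch_size 20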

Once the training process is finished, you can find the trained weights in the trained_params/${exp_id} folder.

4) Run the inference on CARLA-SUMO co-simulation

If the above steps all work properly, you can now use your own trained weights to control the vehicles at the intersection, as shown before:

cd ${Carla_folder}/Co-Simulation/Sumo   
# e.g. cd /home/stud/zhud/Downloads/CARLA_0.9.10/Co-Simulation/Sumo

python run_synchronization.py  ${SUMO_config_file}  --tls-manager carla  --sumo-gui  --step-length ${step_length} --pretrained-weights ${path_to_pretrained_weights}

# e.g. python run_synchronization.py  sumo_files/sumocfg/09-11-15-30-00400-0.09-val_10m_35m-7.sumocfg  --tls-manager carla  --sumo-gui  --step-length 0.1  --pretrained-weights /home/stud/zhud/Multi_Agent_Intersection/trained_params/sumo_0402/model_gnn_wp_sumo_0402_e3_0010.pth

Resources

The MPC module used in this repository to control the vehicles is modified from the code developed here. As mentioned above, the SUMO and CARLA simulators were used to create the intersection map and to conduct the online evaluation. Please refer to the corresponding licenses of these resources regarding their usage.

Contributors

dekai21, qakh
