
Contingencies From Observations

https://sites.google.com/view/contingency-planning/home

[Figure: decision_tree]

Purposes

  1. Serve as the accompanying code for the ICRA 2021 paper Contingencies from Observations.
  2. Provide a framework for running scenarios with PRECOG models in CARLA.

Installing CARLA

This step is straightforward: download the official release build and unpack it on your server. This repository requires CARLA 0.9.8. Please navigate to carla.org to download the correct packages, or do the following:

# Downloads hosted binaries
wget https://carla-releases.s3.eu-west-3.amazonaws.com/Linux/CARLA_0.9.8.tar.gz

# Unpack CARLA 0.9.8 download
tar -xvzf CARLA_0.9.8.tar.gz -C /path/to/your/desired/carla/install

Once downloaded, make sure that CARLAROOT is set to point to your copy of CARLA:

export CARLAROOT=/path/to/your/carla/install

CARLAROOT should point to the base directory, such that the output of ls $CARLAROOT shows the following files:

CarlaUE4     CHANGELOG   Engine  Import           LICENSE                        PythonAPI  Tools
CarlaUE4.sh  Dockerfile  HDMaps  ImportAssets.sh  Manifest_DebugFiles_Linux.txt  README     VERSION
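
As a quick sanity check, a minimal Python sketch like the following (assuming the standard layout of the hosted 0.9.8 binaries) verifies that CARLAROOT is set and that the CARLA Python egg can be imported:

import glob
import os
import sys

# Confirm CARLAROOT points at a CARLA 0.9.8 install.
carla_root = os.environ.get("CARLAROOT")
assert carla_root, "CARLAROOT is not set"

# The hosted binaries ship the Python client as an egg under PythonAPI.
eggs = glob.glob(os.path.join(
    carla_root, "PythonAPI", "carla", "dist", "carla-0.9.8*.egg"))
assert eggs, "no carla-0.9.8 egg found under $CARLAROOT/PythonAPI/carla/dist"

sys.path.append(eggs[0])
import carla  # should now import without error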

Installation

This installs directly and should go smoothly:

conda create -n precog python=3.6.6
conda activate precog
# make sure to source this every time after activating, and make sure $CARLAROOT is set beforehand
source precog_env.sh
pip install -r requirements.txt

Note that CARLAROOT needs to be set and source precog_env.sh needs to be run every time you activate the conda env in a new window/shell.

Before running any of the experiments, you need to launch the CARLA server:

cd $CARLAROOT
./CarlaUE4.sh
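
Once the server is running, a short sketch against the standard CARLA Python API (default port 2000) confirms that a client can reach it:

import carla

# Connect to the CARLA server on the default host/port and fetch the world.
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
print(world.get_map().name)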

Downloading the CARLA dataset

The dataset is available from the official download; note that the overtake data comprises 200 episodes in total for the training set.

The dataset used to train the models in the paper can be downloaded at this link.

Notes on generating the dataset

1. Scenario Runner errors

  • In the relevant code, increase the set_timeout value from 2 s to 10 s or 20 s to ensure the client can communicate with the server (a one-line sketch follows this list).

2. Required dataset length

  • Around 200 episodes should be sufficient.
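
For item 1, the fix is a one-liner wherever the client is created (a sketch; the exact variable names in scenario_runner.py may differ):

import carla

# Raise the client timeout so slow map loads do not drop the connection.
client = carla.Client("localhost", 2000)
client.set_timeout(20.0)  # originally 2.0 s; 10-20 s is more forgiving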

Generating the overtake dataset

Alternatively, data can be generated in CARLA via the scenario_runner.py script:

cd Experiment
python scenario_runner.py \
--enable-collecting \
--scenario 1 \
--location 0  

Episode data will be stored to Experiment/Data folder.

If using an IDE, right-click the Experiment folder and set it as the root directory before running the script. Then run:

cd Experiment
python Utils/prepare_data.py

This will convert the episode data objects into one JSON file per frame and store them in the Data/JSON_output folder.
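
The per-frame schema can be inspected directly once prepare_data.py has run; a minimal sketch (the file name below is taken from the dataset layout shown later in this README):

import json

# Load one exported frame and list its top-level keys.
with open("Data/JSON_output/feed_Episode_1_frame_90.json") as f:
    frame = json.load(f)
print(sorted(frame.keys()))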

The CfO model

The CfO model/architecture code is contained in the precog folder, and is based on the PRECOG repository with several key differences:

  1. The architecture makes use of a CNN to process the LiDAR range map for contextual input instead of a feature map (see precog/bijection/social_convrnn.py); a toy sketch follows this list.
  2. The social features also include velocity and acceleration information of the agents (see precog/bijection/social_convrnn.py).
  3. The plotting script visualizes samples in a fixed set of coordinates with LiDAR overlaid on top (see precog/plotting/plot.py).
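
To illustrate difference 1 only (this is not the repository's actual network; see precog/bijection/social_convrnn.py for that), a CNN encoding a LiDAR range map into a contextual feature vector might look like:

import tensorflow as tf

# Toy stand-in: encode an (H, W, C) LiDAR range image into a flat
# feature vector that can condition the flow model.
lidar_encoder = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu",
                           input_shape=(100, 100, 2)),
    tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64),
])
features = lidar_encoder(tf.zeros([1, 100, 100, 2]))  # shape (1, 64)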

Training the CfO model

Organize the json files into the following structure:

Custom_Dataset
---train
   ---feed_Episode_1_frame_90.json
   ...
---test
   ...
---val
   ...
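
A throwaway script along these lines (hypothetical; it assumes all exported frames are in Data/JSON_output and uses an 80/10/10 split) can produce that layout:

import random
import shutil
from pathlib import Path

# Shuffle the exported frames and split them into train/test/val.
frames = sorted(Path("Data/JSON_output").glob("*.json"))
random.seed(0)
random.shuffle(frames)

n = len(frames)
splits = {"train": frames[:int(0.8 * n)],
          "test": frames[int(0.8 * n):int(0.9 * n)],
          "val": frames[int(0.9 * n):]}

for split, files in splits.items():
    out = Path("Custom_Dataset") / split
    out.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy(f, out / f.name)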

Modify relevant precog/conf files to insert correct absolute paths.

Note that some of these files are missing a closing single quote, which you will need to add.

Custom_Dataset.yaml
esp_infer_config.yaml
esp_train_config.yaml
shared_gpu.yaml
sgd_optimizer.yaml # set training hyperparameters
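
A missing closing quote makes a YAML file unparseable, so a quick check with PyYAML (assuming it is importable in the precog environment) will pinpoint the broken files:

import yaml
from pathlib import Path

# Try to parse each conf file and report YAML syntax errors,
# e.g. an unterminated single-quoted string.
for path in Path("precog/conf").glob("*.yaml"):
    try:
        yaml.safe_load(path.read_text())
    except yaml.YAMLError as e:
        print(f"{path}: {e}")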

These files should otherwise work as-is; just remember to run source precog_env.sh beforehand. Then run:

export CUDA_VISIBLE_DEVICES=0;
python $PRECOGROOT/precog/esp_train.py \
dataset=Custom_Dataset \
main.eager=False \
bijection.params.A=2 \
optimizer.params.plot_before_train=True \
optimizer.params.save_before_train=True

In addition, a few other files need small edits, such as adding the type for the returned meta_list.

Evaluating the trained CfO model

In practice, you only need to run the included test.sh script.

  • Conveniently, the authors ship a pretrained model checkpoint, so you can use it directly to generate a test video.
  • The workflow is: first launch CARLA, then use the scenario runner to interact with it while a camera records the bird's-eye view and the current positions.
  • The recorded images are then exported as a video (a sketch of this step follows this list).
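
The final image-to-video step can be reproduced with a small sketch (hypothetical frame paths; assumes ffmpeg is installed and the recorder saves numbered PNGs):

import subprocess

# Encode a directory of numbered bird's-eye-view frames into an mp4.
subprocess.run([
    "ffmpeg", "-framerate", "10",
    "-i", "Experiment/Data/frames/%04d.png",  # hypothetical pattern
    "-pix_fmt", "yuv420p", "test_episode.mp4",
], check=True)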

To evaluate a trained model in the CARLA simulator, run:

cd Experiment
python scenario_runner.py \
--enable-inference \
--enable-control \
--enable-recording \
--checkpoint_path /path/to/ContingenciesFromObservations/Model/esp_train_results/2021-01/01-24-20-31-06_Left_Turn_Dataset_precog.bijection.social_convrnn.SocialConvRNN_ \
--model_path /path/to/ContingenciesFromObservations/Model/esp_train_results/2021-01/01-24-20-31-06_Left_Turn_Dataset_precog.bijection.social_convrnn.SocialConvRNN_ \
--replan 4 \
--planner_type 0 \
--scenario 1 \
--location 0

A checkpoint of the model used in the paper is provided in Model/esp_train_results.

The example script test.sh will run the experiments from the paper and generate a video for each one. For reference, when using a Titan RTX GPU and Intel i9-10900k CPU each episode takes approximately 10 minutes to run, and the entire script takes several hours to run to completion.

Running the MFP baseline

Install the MFP baseline repo, and set MFPROOT to point to your copy:

export MFPROOT=/your/copy/of/mfp

Use the scenario_runner_mfp.py script to run the MFP model inside of the CARLA scenarios:

# left turn
python scenario_runner_mfp.py \
--enable-inference \
--enable-control \
--enable-recording \
--replan 4 \
--scenario 0 \
--location 0 \
--mfp_control \
--mfp_checkpoint CARLA_left_turn_scenario

# right turn
python scenario_runner_mfp.py \
--enable-inference \
--enable-control \
--enable-recording \
--replan 4 \
--scenario 2 \
--location 0 \
--mfp_control \
--mfp_checkpoint CARLA_right_turn_scenario

# overtake
python scenario_runner_mfp.py \
--enable-inference \
--enable-control \
--enable-recording \
--replan 4 \
--scenario 1 \
--location 0 \
--mfp_control \
--mfp_checkpoint CARLA_overtake_scenario

Citations

To cite this work, use:

@inproceedings{rhinehart2021contingencies,
    title={Contingencies from Observations: Tractable Contingency Planning with Learned Behavior Models},
    author={Nicholas Rhinehart and Jeff He and Charles Packer and Matthew A. Wright and Rowan McAllister and Joseph E. Gonzalez and Sergey Levine},
    booktitle={International Conference on Robotics and Automation (ICRA)},
    organization={IEEE},
    year={2021},
}

License

MIT
