In this document, I would like to share some personal notes about the latest exciting trends in research on decision-making for autonomous driving. I keep updating it.
Template:
"title"
[Year] [paper] [code] [video] [University X] [Company Y] [related, concepts]
Categories:
- Architecture and Map
- Behavioural Cloning, End-To-End and Imitation Learning
- Inverse Reinforcement Learning, Inverse Optimal Control and Game Theory
- Prediction and Manoeuvre Recognition
- Rule-based Decision Making
- Model-Free Reinforcement Learning
- Model-Based Reinforcement Learning
- Planning and Monte Carlo Tree Search
Besides, I reference additional publications in some parallel works:
- Hierarchical Decision-Making for Autonomous Driving
- Educational application of Hidden Markov Model to Autonomous Driving
- My 10 takeaways from the 2019 Intelligent Vehicle Symposium
Looking forward to your reading suggestions!
"Overview of Tools Supporting Planning for Automated Driving"
- [2020] [paper] [Virtual Vehicle Research] - [development tools]
Click to expand
The authors group tools that support planning into sections: maps, communication, traffic rules, middleware, simulators and benchmarks.
About simulators and datasets, and how to couple tools, either with co-simulation software or with open interfaces.
About maps. "The planning tasks with different targets entail map models with different level of details. HD map provides the most sufficient information and can be generally categorized into three layers: road model, lane model and localization model".
Authors: Tong, K., Ajanovic, Z., & Stettinger, G.
-
Motivations:
- 1- Help researchers make full use of open-source resources and reduce the effort of setting up a software platform that suits their needs.
[example] "It is a good option to choose open-source
Autoware
as software stack along withROS
middleware, asAutoware
can be further transferred to a real vehicle. During the development, he or she can useCarla
as a simulator, to get its benefits of graphic rendering and sensor simulation. To make the simulation more realistic, he or she might adopt commercial softwareCarMaker
for sophisticated vehicle dynamics and open-sourceSUMO
for large-scale traffic flow simulation.OpenDRIVE
map can be used as a source and converted into the map format ofAutoware
,Carla
andSUMO
. Finally,CommonRoad
can be used to evaluate the developed algorithm and benchmark it against other approaches."
- 2- Avoid reinventing the wheel.
  - Algorithms are available/adapted from robotics.
  - Simulators are available/adapted from gaming.
- Mentioned software libraries for motion planning:
  - ROS (from robotics):
    - Open Motion Planning Library (OMPL)
    - MoveIt
    - navigation package
    - teb local planner
  - Python: PythonRobotics
  - CPP: CppRobotics
- How to express traffic rules in a form understandable by an algorithm?
  - 1- Traffic rules can be formalized in higher-order logic (e.g. using the Isabelle theorem prover) to check the compliance of trajectories unambiguously and formally.
  - 2- Another approach is to represent traffic rules geometrically, as obstacles in the configuration space of the motion-planning problem.
  - "In some occasions, it is necessary to violate some rules during driving for achieving higher goals (i.e. avoiding collision) [... e.g. with] a rule book with hierarchical arrangement of different rules."
- About data visualization?
- What is missing for the research community?
  - Evaluation tools for quantitative comparison.
  - Evaluation tools incorporating human judgment, not only from the vehicle occupants but also from other road users.
  - A standard format for motion datasets.
- I am surprised the INTERACTION dataset was not mentioned.
- [2020] [paper] [FZI, KIT] - [hierarchical behavioural planning, cost-based arbitration, behaviour components]
Click to expand
Both urban and highway behaviour options are combined using a cost-based arbitrator. Together with Parking and AvoidCollisionInLastResort, these four arbitrators and the SafeStop fallback are composed together into the top-most priority-based AutomatedDriving arbitrator.
Top-right: two possible options. The arbitrator generally prefers the follow-lane behaviour as long as it matches the route. Here, a lane change is necessary and selected by the cost-based arbitration: ChangeLaneRight has a lower cost than FollowEgoLane, mainly due to the routing term in the cost expression. Bottom: the resulting behaviour selection over time.
Authors: Orzechowski, P. F., Burger, C., & Lauer, M.
- Motivation:
  - Propose an alternative to FSMs (finite state machines) and behaviour-based systems (e.g. voting systems) in hierarchical architectures.
  - In particular, FSMs can suffer from:
    - Poor interpretability: why is one behaviour executed?
    - Maintainability: effort to refine existing behaviours.
    - Scalability: effort to achieve a high number of behaviours and to combine a large variety of scenarios.
    - Options selection: "multiple behaviour options are applicable but have no clear and consistent priority against each other."
      - "How and when should an automated vehicle switch from a regular ACC controller to a lane change, cooperative zip merge or parking planner?"
  - Multiple planners: each behaviour component can compute its manoeuvre command with any preferred state-of-the-art method.
    - "How can we support POMDPs, hybrid A* and any other planning method in our behaviour generation?"
- Main idea: cost-based arbitration between so-called "behaviour components".
  - The modularity of these components brings several advantages:
    - Various scenarios can be handled within a single framework: four-way intersections, T-junctions, roundabouts, multilane bypass roads, parking, etc.
    - By hierarchically combining behaviours, complex behaviour emerges from simple components.
    - Good efficiency: the atomic structure allows behaviour options to be evaluated in parallel.
- About arbitration:
  - "An arbitrator contains a list of behavior options to choose from. A specific selection logic determines which option is chosen based on abstract information, e.g., expected utility or priority."
  - [about cost] "The cost-based arbitrator selects the behavior option with the lowest expected cost."
    - Each behaviour option is evaluated based on its expected average travel velocity, incorporating routing costs and penalizing lane changes.
    - The resulting behaviour can thus be well explained: "The selection logic of arbitrators is comprehensive."
  - About hierarchy:
    - "To generate even more complex behaviours, an arbitrator can also be a behaviour option of a hierarchically higher arbitrator."
About
behaviour
components.- There are the smallest building blocks, representing basic tactical driving manoeuvres.
- Example of atomic behaviour components for simple tasks in urban scenarios:
FollowLead
CrossIntersection
ChangeLane
- They can be specialized:
- Dense scenarios behaviours:
ApproachGap
,IndicateIntention
andMergeIntoGap
to refineChangeLane
(multi-phase behaviour).- Note: an alternative could be to use one single integrated interaction-aware behaviour such as
POMDP
.
- Note: an alternative could be to use one single integrated interaction-aware behaviour such as
- Highway behaviours (structured but high speed):
MergeOntoHighway
,FollowHighwayLane
,ChangeHighwayLane
,ExitFromHighway
. - Parking behaviours:
LeaveGarage
,ParkNearGoal
. - Fail-safe emergency behaviours:
EmergenyStop
,EvadeObject
,SafeStop
.
- Dense scenarios behaviours:
- For a
behaviour
to be selected, it should be applicable. Hence abehaviour
is defined together with:invocation
condition: when does it become applicable.-
"[example:] The
invocation
condition ofCrossIntersection
is true as long as the current ego lane intersects other lanes within its planning horizon."
-
commitment
condition: when does it stay applicable.
- This reminds me the concept of
macro actions
, sometimes defined by a tuple <applicability condition
,termination condition
,primitive policy
>. - It also makes me think of
MODIA
framework and other scene-decomposition approaches.
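A minimal sketch of the arbitration pattern described above (my own illustration, not the authors' code): each behaviour option exposes an invocation condition, a commitment condition and an expected cost, and a cost-based arbitrator picks the applicable option with the lowest cost. The option names and the toy cost terms are assumptions.

```python
# Hedged sketch of cost-based arbitration over behaviour components.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class BehaviourOption:
    name: str
    invocation: Callable[[dict], bool]       # when does it become applicable?
    commitment: Callable[[dict], bool]       # when does it stay applicable once active?
    expected_cost: Callable[[dict], float]   # e.g. -expected velocity + routing + lane-change penalty

def cost_based_arbitration(options: List[BehaviourOption],
                           env: dict,
                           active: Optional[BehaviourOption] = None) -> Optional[BehaviourOption]:
    """Select the applicable option with the lowest expected cost.
    An already-active option stays applicable through its commitment condition."""
    applicable = [o for o in options
                  if o.invocation(env) or (o is active and o.commitment(env))]
    if not applicable:
        return None  # a higher, priority-based arbitrator could then fall back to SafeStop
    return min(applicable, key=lambda o: o.expected_cost(env))

# Toy usage: the lane change wins because of the routing term in the keep-lane cost.
options = [
    BehaviourOption("FollowEgoLane",
                    invocation=lambda e: True, commitment=lambda e: True,
                    expected_cost=lambda e: -e["v_keep"] + e["routing_penalty_keep"]),
    BehaviourOption("ChangeLaneRight",
                    invocation=lambda e: e["gap_right"], commitment=lambda e: e["gap_right"],
                    expected_cost=lambda e: -e["v_right"] + 0.5),  # 0.5 = lane-change penalty
]
env = {"v_keep": 10.0, "v_right": 10.0, "routing_penalty_keep": 2.0, "gap_right": True}
best = cost_based_arbitration(options, env)   # -> ChangeLaneRight
```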
- A mid-to-mid approach:
  - "[input] The input is an abstract environment model that contains a fused, tracked and filtered representation of the world."
  - [output] The selected high-level decision is passed to a trajectory planner or controller.
  - What does the "decision" look like?
    - One-size-fits-all is not an option. A distinction is made between manoeuvres in structured and unstructured environments:
      - 1- unstructured: a trajectory, directly passed to a trajectory planner.
      - 2- structured: a corridor-based driving command, i.e. a tuple <maneuver corridor, reference line, predicted objects, maneuver variant>. It requires both a trajectory planner and a controller.
One distinction:
1-
top-downknowledge
-based systems.-
"The
action selection
in a centralized, in a top-down manner using a knowledge database." -
"The engineer designing the action selection module (in
FSM
s the state transitions) has to be aware of the conditions, effects and possible interactions of all behaviors at hand."
-
2-
bottom-upbehaviour
-based systems.-
"Decouple actions into atomic simple
behaviour
components that should be aware of their conditions and effects." - E.g.
voting systems
.
-
- Here the authors combine atomic behaviour components (
bottom
/down
) with more complex behaviours using generic arbitrators (top
/up
).
"A Review of Motion Planning for Highway Autonomous Driving"
- [2019] [paper] [French Institute of Science and Technology for Transport] [VEDECOM Institute]
Click to expand
The review divides motion-planning into five parts. The decision-making part contains risk evaluation, criteria minimization, and constraint submission. In the last part, a low-level reactive planner deforms the motion generated by the high-level planner.
The review offers two detailed tools for comparing motion-planning methods for highway scenarios. Criteria for the generated motion include: feasible, safe, optimal, usable, adaptive, efficient, progressive and interactive. The authors stress the importance of spatiotemporal considerations and emphasize that highway motion-planning is highly structured.
Contrary to solve-algorithms methods, set-algorithm methods require a complementary algorithm to find the feasible motion. Depending on the importance of the generation (iv) and deformation (v) parts, approaches are more or less reactive or predictive. Finally, based on their work on AI-based algorithms, the authors define four subfamilies to compare to humans: logic, heuristic, approximate reasoning, and human-like.
The review also offers overviews of possible space configurations, i.e. the choices for decomposing the evolution space (sampling-based decomposition, connected-cells decomposition and lattice representation), as well as path-finding algorithms (e.g. Dijkstra, A*, and RRT). Attractive and repulsive forces, parametric and semi-parametric curves, numerical optimization and artificial intelligence are also covered.
Authors: Claussmann, L., Revilloud, M., Gruyer, D., & Glaser, S.
"A Survey of Deep Learning Applications to Autonomous Vehicle Control"
- [2019] [paper] [University of Surrey] [Jaguar Land Rover]
Click to expand
Challenges for learning-based control methods.
Authors: Kuutti, S., Bowden, R., Jin, Y., Barber, P., & Fallah, S.
- Three categories are examined:
  - lateral control alone.
  - longitudinal control alone.
  - longitudinal and lateral control combined.
- Two quotes:
  - "While lateral control is typically achieved through vision, the longitudinal control relies on measurements of relative velocity and distance to the preceding/following vehicles. This means that ranging sensors such as RADAR or LIDAR are more commonly used in longitudinal control systems."
  - "While lateral control techniques favour supervised learning techniques trained on labelled datasets, longitudinal control techniques favour reinforcement learning methods which learn through interaction with the environment."
"Longitudinal Motion Planning for Autonomous Vehicles and Its Impact on Congestion: A Survey"
- [2019] [paper] [Georgia Institute of Technology]
Click to expand
mMP refers to machine learning methods for longitudinal motion planning.
Authors: Zhou, H., & Laval, J.
- This review was completed at a school of "civil and environmental engineering".
  - It does not make a scientific contribution per se, but offers a quick overview of some current trends in decision-making.
  - The authors try to look at industrial applications (e.g. Waymo, Uber, Tesla), i.e. not just focusing on theoretical research. Since companies do not communicate explicitly about their approaches, most of their publications should be considered research side-projects rather than the "actual state" of the industry.
- One focus of the review: the machine learning approaches for decision-making for longitudinal motion.
- About the architecture and representation models. They mention the works of DeepDriving and (H. Xu, Gao, Yu, & Darrell, 2016).
  - Mediated perception approaches parse an entire scene to make a driving decision.
  - Direct perception approaches first extract affordance indicators (i.e. only the information that is important for driving in a particular situation) and then map them to actions.
    - "Only a small portion of detected objects are indeed related to the real driving reactions so that it would be meaningful to reduce the number of key perception indicators known as learning affordances".
  - Behavioural reflex approaches directly map an input image to a driving action with a regressor.
    - This end-to-end paradigm can be extended with auxiliary tasks such as learning semantic segmentation (this "side task" should further improve the model), leading to privileged training.
- About the learning methods: BC, RL, IRL and GAIL are considered.
  - The authors argue that their memory and prediction abilities should make them stand out from the rule-based approaches.
  - "Both BC and IRL algorithms implicitly assume that the demonstrations are complete, meaning that the action for each demonstrated state is fully observable and available."
  - "We argue that adopting RL transforms the problem of learnt longitudinal motion planning from imitating human demonstrations to searching for a policy complying a hand-crafted reward rule [...] No studies have shown that a genuine reward function for human driving really exists."
- About congestion:
  - "The AV industry has been mostly focusing on the long tail problem caused by corner errors related to safety, while the impact of AVs on traffic efficiency is almost ignored."
  - It reminds me of the finding of (Kellett, J., Barreto, R., Van Den Hengel, A. & Vogiatzis, N., 2019) in "How Might Autonomous Vehicles Impact the City? The Case of Commuting to Central Adelaide": driverless cars could lead to more traffic congestion.
"Design Space of Behaviour Planning for Autonomous Driving"
- [2019] [paper] [University of Waterloo]
Click to expand
Some figures:
The focus is on the BP module, together with its predecessor (environment) and its successor (LP) in a modular architecture.
Classification for Question 1 - environment representation. A combination is possible. In black, my notes giving examples.
Classification for Question 2 - on the architecture.
Classification for Question 3 - on the decision logic representation.
Authors: Ilievski, M., Sedwards, S., Gaurav, A., Balakrishnan, A., Sarkar, A., Lee, J., Bouchard, F., De Iaco, R., & Czarnecki K.
The authors divide their review into three sections:
- Question 1: How to represent the environment? (relation with the predecessor of BP)
  - Four representations are compared: raw data, feature-based, grid-based and latent representation.
- Question 2: How to communicate with other modules, especially the local planner (LP)? (relation with the successor (LP) of BP)
  - A first sub-question is the relevance of the BP/LP separation.
    - A complete separation (top-down) can lead to computational redundancy (both have a collision checker).
    - One idea, close to sampling techniques, could be to invert the traditional architecture for planning, i.e. generate multiple possible local paths (~LP) then select the best manoeuvre according to a given cost function (~BP). But this exacerbates the previous point.
  - A second sub-question concerns prediction: should the BP module have its own dedicated prediction module?
    - First, three kinds of prediction are listed, depending on what should be predicted (marked with ->):
      - Physics-based (-> trajectory).
      - Manoeuvre-based (-> low-level motion primitives).
      - Interaction-aware (-> intent).
    - Then, the authors distinguish between explicitly-defined and implicitly-defined prediction models:
      - Explicitly-defined can be:
        - Integrated with the motion-planning process (called internal prediction models), such as belief-based decision making (e.g. POMDP). Ideal for planning under uncertainty.
        - Decoupled from the planning process (called external prediction models). There is a clear interface between prediction and planning, which aids modularity.
      - Implicitly-defined, such as RL techniques.
- Question 3: How to make BP decisions? (BP itself)
  - A first distinction in the representation of decision logic is made based on non-learnt / learnt:
    - Using a set of explicitly programmed production rules, which can be divided into:
      - Imperative approaches, e.g. state machines.
      - Declarative approaches, often based on some probabilistic system.
        - The decision-tree structure and the (PO)MDP formulation make them more robust to uncertainty.
        - Examples include MCTS and online POMDP solvers.
    - Logic representation can also rely on mathematical models with parameters learned a priori.
      - A distinction is made depending on "where does the training data come from and when is it created?".
      - In other words, one could think of supervised learning (learning from example) versus reinforcement learning (learning from interaction).
      - The combination of both seems beneficial:
        - An initial behaviour is obtained through imitation learning (learning from example). Also possible with IRL.
        - But improvements are made through interaction with a simulated environment (learning from interaction).
        - By the way, the learning-from-interaction techniques raise the question of the origin of the experience (e.g. realism of the simulator) and its sampling efficiency.
      - Another promising direction is hierarchical RL, where the MDP is divided into sub-problems (the lower for LP and the higher for BP):
        - The lowest-level implementation of the hierarchy approximates a solution to the control and LP problem ...
        - ... while the higher level selects a manoeuvre to be executed by the lower-level implementations.
  - As mentioned in my section on Scenarios and Datasets, the authors point out the lack of benchmarks to compare and evaluate the performance of BP technologies.
One quote about the representation of decision logic:
- As identified in my notes about IV19, the combination of learnt and non-learnt approaches looks the most promising.
- "Without learning, traditional robotic solutions cannot adequately handle complex, dynamic human environments, but ensuring the safety of learned systems remains a significant challenge."
- "Hence, we speculate that future high performance and safe behaviour planning solutions will be hybrid and heterogeneous, incorporating modules consisting of learned systems supervised by programmed logic."
"A Behavioral Planning Framework for Autonomous Driving"
- [2014] [paper] [Carnegie Mellon University] [General Motors] - [behavioural planning, sampling-based planner, decision under uncertainty, TORCS]
Click to expand
Some figures:
Comparison and fusion of the hierarchical and parallel architectures.
The PCB algorithm implemented in the BP module.
Related work by (Xu, Pan, Wei, & Dolan, 2014) - grey ellipses indicate the magnitude of the uncertainty of the state.
Authors: Wei, J., Snider, J. M., & Dolan, J. M.
Note: I find it very valuable to get insights from the CMU (Carnegie Mellon University) team, based on their experience of the DARPA Urban Challenges.
- Related works:
  - "A prediction- and cost function-based algorithm for robust autonomous freeway driving" (Wei, Dolan, & Litkouhi, 2010).
    - They introduced the "Prediction- and Cost-function Based (PCB) algorithm" used here.
    - The idea is to generate-forward_simulate-evaluate a set of manoeuvres (see the sketch below).
    - The planner can therefore take surrounding vehicles' reactions into account in the cost function when it searches for the best strategy.
    - At the time, the authors rejected the option of a POMDP formulation (computing the control policy over the space of the belief state, which is a probability distribution over all the possible states), deemed too computationally expensive. Improvements in hardware and algorithms have been made since 2014.
  - "Motion planning under uncertainty for on-road autonomous driving" (Xu, Pan, Wei, & Dolan, 2014).
    - An extension of the framework to consider uncertainty (both in the environment and in the other participants) in the decision-making.
    - The prediction module uses a Kalman Filter (assuming constant velocity).
    - For each candidate trajectory, the uncertainty can be estimated using a Linear-Quadratic Gaussian (LQG) framework (based on the noise characteristics of the localization and control).
    - Their Gaussian-based method gives some probabilistic safety guarantee (e.g. a 2% likelihood of collision).
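As a rough illustration of the generate-forward_simulate-evaluate idea (a hypothetical sketch, not the PCB implementation), the loop could look like this; the candidate generator, forward model and cost terms are placeholders:

```python
# Hypothetical sketch of a generate / forward-simulate / evaluate planning loop.
from typing import Callable, List, Sequence, Tuple

Manoeuvre = dict          # e.g. {"target_lane": 0, "target_speed": 25.0}
State = dict              # ego + surrounding vehicles

def plan_pcb_style(state: State,
                   generate: Callable[[State], List[Manoeuvre]],
                   forward_simulate: Callable[[State, Manoeuvre], Sequence[State]],
                   cost: Callable[[Sequence[State]], float]) -> Tuple[Manoeuvre, float]:
    """Generate candidate manoeuvres, roll each one forward with a prediction model
    (which can include the reactions of surrounding vehicles), and keep the cheapest."""
    best, best_cost = None, float("inf")
    for manoeuvre in generate(state):
        rollout = forward_simulate(state, manoeuvre)   # closed-loop prediction horizon
        c = cost(rollout)                              # e.g. progress, comfort, risk terms
        if c < best_cost:
            best, best_cost = manoeuvre, c
    return best, best_cost
```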
- Proposed architecture for decision-making:
  - First ingredient: hierarchical architecture.
    - The hierarchy mission -> manoeuvre -> motion (the "3M" concept) makes it very modular but can raise limitations:
      - "the higher-level decision making module usually does not have enough detailed information, and the lower-level layer does not have authority to re-evaluate the decision."
  - Second ingredient: parallel architecture.
    - This is inspired by ADAS engineering.
    - The control modules (ACC, Merge Assist, Lane Centring) are relatively independent and work in parallel.
    - In some complicated cases needing cooperation, this framework may not perform well.
      - This probably shows that just extending the common ADAS architectures cannot be enough to reach level-5 autonomy.
  - Idea of the proposed framework: combine the strengths of the hierarchical and parallel architectures.
    - This relieves the path planner and the control module (the search space is reduced).
    - Hence the computational cost shrinks (by over 90% compared to a sample-based planner in the spatio-temporal space).
- One module worth mentioning: the Traffic-free Reference Planner.
  - Its input: lane-level sub-missions from the Mission Planning.
  - Its output: kinematically and dynamically feasible paths and a speed profile for the Behavioural Planner (BP).
    - It assumes there is no traffic on the road, i.e. it ignores dynamic obstacles.
    - It also applies traffic rules such as speed limits.
  - This guides the BP layer, which considers both static and dynamic obstacles to generate so-called "controller directives" such as:
    - The lateral driving bias.
    - The desired leading vehicle to follow.
    - The aggressiveness of distance keeping.
    - The maximum speed.
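Those controller directives could be grouped into a small container like the following (a hypothetical sketch; the field names and units are my own, not from the paper):

```python
# Hypothetical container for the "controller directives" listed above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ControllerDirectives:
    lateral_bias_m: float                    # desired lateral offset from the lane centre
    lead_vehicle_id: Optional[int]           # which vehicle to follow, if any
    distance_keeping_aggressiveness: float   # e.g. in [0, 1], tuning the time gap
    max_speed_mps: float
```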
"A Survey of End-to-End Driving: Architectures and Training Methods"
- [2020] [paper] [University of Tartu] - [review, distribution shift problem, domain adaptation, mid-to-mid]
Click to expand
Left: example of an end-to-end architecture with key terms. Right: difference between open-loop and closed-loop evaluation.
Authors: Tampuu, A., Semikin, M., Muhammad, N., Fishman, D., & Matiisen, T.
- A rich literature overview and some useful reminders about general IL and RL concepts, with a focus on AD applications.
  - It constitutes a good complement to the "Related trends in research" part of my video "From RL to Inverse Reinforcement Learning: Intuitions, Concepts + Applications to Autonomous Driving".
- I especially like the structure of the document: it shows what one should consider when starting an end-to-end / IL project for AD.
  - I have just noted here some ideas I find interesting. In no way an exhaustive summary!
- 1- Learning methods: working with rewards (RL) or with losses (behavioural cloning).
  - About the distribution shift problem in behavioural cloning:
    - "If the driving decisions lead to unseen situations (not present in the training set), the model might no longer know how to behave".
    - Most solutions try to diversify the training data in some way - either by collecting or generating additional data:
      - data augmentation: e.g. one can place two additional cameras pointing forward-left and forward-right and associate the images with commands to turn right and turn left respectively (a sketch of this label-correction trick is given below).
      - data diversification: addition of temporally correlated noise and synthetic trajectory perturbations. Easier on "semantic" inputs than on camera inputs.
      - on-policy learning: recovery annotation and DAgger. The expert provides examples of how to solve situations that the model's driving leads to. Also "Learning by Cheating" by (Chen et al., 2019).
      - balancing the dataset: by upsampling the rarely occurring angles, downsampling the common ones or by weighting the samples.
        - "Commonly, the collected datasets contain large amounts of repeated traffic situations and only few of those rare events."
        - The authors claim that only the joint distribution of inputs and outputs defines the rarity of a data point.
        - "Using more training data from CARLA Town1 decreases generalization ability in Town2. This illustrates that more data without more diversity is not useful."
        - Ideas for augmentation can be taken from the field of supervised learning, where it is already a largely-addressed topic.
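A minimal sketch of the side-camera augmentation mentioned above (my own illustration; the fixed steering correction is an arbitrary assumption, whereas real pipelines would derive it from the camera geometry):

```python
# Hedged sketch: augment a behavioural-cloning dataset with side cameras.
# The left-facing camera sees the road as if the car had drifted left, so its image is
# paired with a corrective "steer right" label (and vice versa).
from typing import List, Tuple

STEERING_CORRECTION = 0.15  # assumption: fixed offset, in normalized steering units

def augment_with_side_cameras(samples: List[dict]) -> List[Tuple[str, float]]:
    """samples: [{"center": path, "left": path, "right": path, "steering": float}, ...]
    Returns (image_path, steering_label) pairs including the corrected side views."""
    out = []
    for s in samples:
        out.append((s["center"], s["steering"]))
        out.append((s["left"],   s["steering"] + STEERING_CORRECTION))   # recover to the right
        out.append((s["right"],  s["steering"] - STEERING_CORRECTION))   # recover to the left
    return out
```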
  - About RL:
    - Policies can be first trained with IL and then fine-tuned with RL methods.
    - "This approach reduces the long training time of RL approaches and, as the RL-based fine-tuning happens online, also helps overcome the problem of IL models learning off-policy".
  - About domain adaptation and transfer from simulation to the real world (sim2real):
    - Techniques from supervised learning, such as fine-tuning, i.e. adapting the driving model to the new distribution, are rarely used.
    - Instead, one can adapt the incoming data and keep the driving model fixed.
      - A first idea is to transform real images into simulation-like images (the opposite - generating realistic-looking images - is challenging).
      - One can also extract the semantic segmentation of the scene from both the real and the simulated images and use it as the input for the driving policy.
2-
Inputs.- In short:
Vision
is key.Lidar
andHD-map
are nice to have but expensive / tedious to maintain.- Additional inputs from independent modules (
semantic segmentation
,depth map
,surface normals
,optical flow
andalbedo
) can improve the robustness.
- About the
inertia problem
/causal confusion
when for instance predicting the nextego-speed
.-
"As in the vast majority of samples the current [observed] and next speeds [to be predicted] are highly correlated, the model learns to base its speed prediction exclusively on current speed. This leads to the model being reluctant to change its speed, for example to start moving again after stopping behind another car or a at traffic light."
-
- About
affordances
:-
"Instead of parsing all the objects in the driving scene and performing robust localization (as modular approach), the system focuses on a small set of crucial indicators, called
affordances
."
-
- In short:
- 3- Outputs.
  - "The outputs of the model define the level of understanding the model is expected to achieve."
  - Also related to the time horizon:
    - "When predicting instantaneous low-level commands, we are not explicitly forcing the model to plan a long-term trajectory."
  - Three types of predictions:
    - 3-1 Low-level commands.
      - "The majority of end-to-end models yield as output the steering angle and speed (or acceleration and brake commands) for the next timestep".
      - Low-level commands may be car-specific. For instance, vehicles respond differently to the same throttle / steering commands.
        - "The function between steering wheel angle and the resulting turning radius depends on the car's geometry, making this measure specific to the car type used for recording."
      - [About the regression loss] "Many authors have recently optimized speed and steering commands using L1 loss (mean absolute error, MAE) instead of L2 loss (mean squared error, MSE)".
    - 3-2 Future waypoints or desired trajectories.
      - This higher-level output modality is independent of car geometry.
    - 3-3 Cost map, i.e. information about where it is safe to drive, leaving the trajectory generation to another module.
  - About multitask learning and auxiliary tasks:
    - The idea is to simultaneously train a separate set of networks to predict, for instance, semantic segmentation, optical flow, depth and other human-understandable representations from the camera feed.
    - "Based on the same extracted visual features that are fed to the decision-making branch (main task), one can also predict ego-speed, drivable area on the scene, and positions and speeds of other objects".
    - It offers more learning signals - at least for the shared layers.
    - And it can also help understand the mistakes a model makes:
      - "A failure in an auxiliary task (e.g. object detection) might suggest that necessary information was not present already in the intermediate representations (layers) that it shared with the main task. Hence, also the main task did not have access to this information and might have failed for the same reason."
- 4- Evaluation: the difference between open-loop and closed-loop.
  - 4-1 open-loop: like in supervised learning:
    - One question = one answer.
    - Typically, a dataset is split into training and testing data.
    - Decisions are compared with the recorded actions of the demonstrator, assumed to be the ground truth.
  - 4-2 closed-loop: like in decision processes:
    - The problem consists in a multi-step interaction with some environment.
    - It directly measures the model's ability to drive on its own.
  - Interesting fact: good open-loop performance does not necessarily lead to good driving ability in closed-loop settings.
    - "Mean squared error (MSE) correlates with closed-loop success rate only weakly (correlation coefficient r = 0.39), so MAE, quantized classification error or thresholded relative error should be used instead (r > 0.6 for all three)."
    - About the balanced-MAE metric for open-loop evaluation, which correlates better with closed-loop performance than simple MAE (a sketch is given below):
      - "Balanced-MAE is computed by averaging the mean values of unequal-length bins according to steering angle. Because most data lies in the region around steering angle 0, equally weighting the bins grows the importance of rarely occurring (higher) steering angles."
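Here is a rough sketch of such a balanced-MAE metric (my own reading of the quoted definition; the bin edges are an assumption):

```python
# Hedged sketch: bin samples by ground-truth steering angle, compute the MAE inside each
# non-empty bin, then average the per-bin MAEs so that rare (large) steering angles weigh
# as much as the dominant near-zero ones.
import numpy as np

def balanced_mae(y_true: np.ndarray, y_pred: np.ndarray,
                 bin_edges: np.ndarray = np.linspace(-1.0, 1.0, 21)) -> float:
    errors = np.abs(y_true - y_pred)
    bin_ids = np.digitize(y_true, bin_edges)
    per_bin = [errors[bin_ids == b].mean() for b in np.unique(bin_ids)]
    return float(np.mean(per_bin))

# Toy usage: the two large-angle errors dominate the score despite being rare.
y_true = np.array([0.00, 0.01, -0.02, 0.5, -0.7])
y_pred = np.array([0.00, 0.00, -0.01, 0.2, -0.2])
print(balanced_mae(y_true, y_pred))
```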
- 5- Interpretability:
  - 5-1 Either on the trained model ... (see the sketch below)
    - "Sensitivity analysis aims to determine the parts of an input that a model is most sensitive to. The most common approach involves computing the gradients with respect to the input and using the magnitude as the measure of sensitivity."
    - VisualBackProp: which input pixels influence the car's driving decision the most.
  - 5-2 ... or already during training.
    - "visual attention is a built-in mechanism present already when learning. Where to attend in the next timestep (the attention mask), is predicted as additional output in the current step and can be made to depend on additional sources of information (e.g. textual commands)."
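As a rough illustration of the gradient-based sensitivity analysis quoted above (not VisualBackProp itself), a saliency map can be computed with a few lines of PyTorch; the model and its assumed scalar steering output are placeholders:

```python
# Hedged sketch: gradient-based sensitivity (saliency) for a steering-prediction network.
# `model` is any differentiable torch.nn.Module mapping an image to a steering value.
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """image: (3, H, W) tensor. Returns a (H, W) map of |d steering / d pixel|."""
    model.eval()
    x = image.unsqueeze(0).clone().requires_grad_(True)   # (1, 3, H, W)
    steering = model(x).squeeze()                          # assumed scalar output
    steering.backward()
    # Gradient magnitude, max over colour channels, as the sensitivity measure.
    return x.grad.abs().squeeze(0).max(dim=0).values
```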
About
end-to-end
neural nets and humans:-
"[
StarCraft
,Dota 2
,Go
andChess
solved withNN
]. Many of these solved tasks are in many aspects more complex than driving a car, a task that a large proportion of people successfully perform even when tired or distracted. A person can later recollect nothing or very little about the route, suggesting the task needs very little conscious attention and might be a simple behavior reflex task. It is therefore reasonable to believe that in the near future anend-to-end
approach is also capable to autonomously control a vehicle."
-
"Efficient Latent Representations using Multiple Tasks for Autonomous Driving"
- [2020] [paper] [Aalto University] - [latent space representation, multi-head decoder, auxiliary tasks]
Click to expand
The latent representation is enforced to predict the trajectories of both the ego vehicle and other vehicles, in addition to the input image, using a multi-head network structure.
Authors: Kargar, E., & Kyrki, V.
- Motivations:
  - 1- Reduce the dimensionality of the feature representation of the scene - used as input to some IL / RL policy.
    - This is to improve most mid-to-x approaches that encode and process a vehicle's environment as multi-channel and quite high-dimensional bird-view images.
    - -> The idea here is to learn an encoder-decoder.
      - The latent space has size 64 (way smaller than the common 64 x 64 x N bird-views).
  - 2- Learn a latent representation faster / with fewer data.
    - A single-head decoder would just consider reconstruction.
    - -> The idea here is to use multiple heads in the decoder, i.e. to predict multiple auxiliary, application-relevant factors.
      - "The multi-head model can reach the single-head model's performance in 20 epochs, one-fifth of training time of the single-head model, with full dataset."
      - "In general, the multi-head model, using only 6.25% of the dataset, converges faster and perform better than single head model trained on the full dataset."
  - 3- Learn a policy faster / with fewer data.
- Two components to train:
  - 1- An encoder-decoder learns to produce a latent representation (encoder) coupled with a multiple-prediction objective (decoder).
  - 2- A policy uses the latent representation to predict low-level controls.
- About the encoder-decoder (a sketch is given after this section):
  - inputs: a bird-view image containing:
    - Environment info, built from HD Maps and perception.
    - Ego trajectory: 10 past poses.
    - Other trajectories: 10 past poses.
    - It forms a 256 x 256 image, which is resized to 64 x 64 before being fed into the models.
  - outputs: multiple auxiliary tasks:
    - 1- Reconstruction head: reconstructing the input bird-view image.
    - 2- Prediction head: 1s motion prediction for other agents.
    - 3- Planning head: 1s motion prediction for the ego car.
- About the policy:
  - In their example, the authors implement behaviour cloning, i.e. supervised learning to reproduce the decisions of the CARLA autopilot.
    - 1- steering prediction.
    - 2- acceleration classification - 3 classes.
- How to deal with the unbalanced dataset?
  - First, the authors note that no manual labelling is required to collect training data.
  - But the recorded steering angle is zero most of the time - leading to a highly imbalanced dataset.
  - Solution (no further detail): "Create a new dataset and balance it using sub-sampling".
-
[
2019
] [π] [ πUniversity of Oxford, UC Berkeley, Carnegie Mellon University
] -
[
epistemic uncertainty
,risk-aware decision-making
,CARLA
]
Click to expand
Illustration of the state distribution shift in behavioural cloning (BC) approaches. The models (e.g. neural networks) usually fail to generalize and instead extrapolate confidently yet incorrectly, resulting in arbitrary outputs and dangerous outcomes. Not to mention the compounding (or cascading) errors, inherent to sequential decision making.
Testing behaviours on scenarios such as roundabouts that are not present in the training set.
Above - in their previous work, the authors introduced Deep Imitative Models (IM). The imitative planning objective is the log posterior probability of a state trajectory, conditioned on satisfying some goal G. The state trajectory that has the highest likelihood w.r.t. the expert model q(S | φ; θ) is selected, i.e. the maximum a posteriori probability (MAP) estimate of how an expert would drive to the goal. This captures any inherent aleatoric stochasticity of the human behaviour (e.g. multi-modalities), but only uses a point estimate of θ, so q(S | φ; θ) does not quantify model (i.e. epistemic) uncertainty. φ denotes the contextual information (3 previous states and the current LIDAR observation) and S denotes the agent's future states (i.e. the trajectory). Bottom - in this work, an ensemble of models is used: q(S | φ; θ_k), where θ_k denotes the parameters of the k-th model (neural network). The aggregation operator is applied on the posterior p(θ | D). The previous work is one example of that, where a single θ_i is selected.
To save computation and bring the runtime to real-time, the authors use a trajectory library: they perform K-means clustering of the expert plans from the training distribution and keep 128 of the centroids, allegedly reducing the planning time by a factor of 400. During optimization, the trajectory space is limited to that trajectory library. It makes me think of the templates sometimes used for path planning. I also see it as a way to restrict the search in the trajectory space, similar to injecting expert knowledge about the feasibility of car trajectories.
Estimating the uncertainty is not enough. One should then forward that estimate to the planning module. This reminds me of an idea of (McAllister et al., 2017) about the key benefit of propagating uncertainty throughout the AV framework.
Authors: Tigas, P., Filos, A., Mcallister, R., Rhinehart, N., Levine, S., & Gal, Y.
- Previous work: "Deep Imitative Models for Flexible Inference, Planning, and Control" (see below).
  - The idea was to combine the benefits of imitation learning (IL) and goal-directed planning such as model-based RL (MBRL).
    - In other words, to perform planning based on some imitation prior, by combining generative modelling from demonstration data with planning.
    - One key idea of this generative model of expert behaviour: perform context-conditioned density estimation of the distribution over future expert trajectories, i.e. score the "expertness" of any plan of future positions.
  - Limitations:
    - It only uses a point estimate of θ. Hence it fails to capture epistemic uncertainty in the model's density estimate.
    - "Plans can be risky in scenes that are out-of-training-distribution since it confidently extrapolates in novel situations and lead to catastrophes".
- Motivations here:
  - 1- Develop a model that captures epistemic uncertainty.
  - 2- Estimating uncertainty is not a goal in itself: one also needs to provide a mechanism for taking low-risk actions that are likely to recover in uncertain situations.
    - I.e. both aleatoric and epistemic uncertainty should be taken into account in the planning objective.
    - This reminds me of the figure of (McAllister et al., 2017) about the key benefit of propagating uncertainty throughout the AV framework.
- One quote about behavioural cloning (BC), which suffers from state distribution shift (co-variate shift):
  - "Where high capacity parametric models (e.g. neural networks) usually fail to generalize, and instead extrapolate confidently yet incorrectly, resulting in arbitrary outputs and dangerous outcomes".
- One quote about model-free RL:
  - "The specification of a reward function is as hard as solving the original control problem in the first place."
- About epistemic and aleatoric uncertainties:
  - "Generative models can provide a measure of their uncertainty in different situations, but robustness in novel environments requires estimating epistemic uncertainty (e.g., have I been in this state before?), where conventional density estimation models only capture aleatoric uncertainty (e.g., what's the frequency of times I ended up in this state?)."
- How to capture uncertainty about previously unseen scenarios?
  - Using an ensemble of density estimators and aggregation operators over the models' outputs (see the sketch below).
    - "By using demonstration data to learn density models over human-like driving, and then estimating its uncertainty about these densities using an ensemble of imitative models".
  - The idea is to take the disagreement between the models into consideration and inform planning.
    - "When a trajectory that was never seen before is selected, the model's high epistemic uncertainty pushes us away from it. During planning, the disagreement between the most probable trajectories under the ensemble of imitative models is used to inform planning."
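A rough sketch of that ensemble-based scoring (my own illustration: the per-model log-likelihood functions and the trajectory library are placeholders, and the pessimistic `min` aggregator is just one possible choice):

```python
# Hedged sketch: robust trajectory selection with an ensemble of imitative models.
# Each model scores how "expert-like" a candidate trajectory is (log-likelihood given the
# context); a pessimistic aggregator penalizes plans the models disagree on.
from typing import Callable, List, Sequence
import numpy as np

LogLikelihoodModel = Callable[[np.ndarray, dict], float]  # (trajectory, context) -> log q_k

def robust_imitative_plan(trajectory_library: Sequence[np.ndarray],
                          context: dict,
                          ensemble: List[LogLikelihoodModel],
                          aggregator: Callable[[np.ndarray], float] = np.min) -> np.ndarray:
    """Pick the candidate trajectory with the best aggregated score over the ensemble.
    With `np.min`, a plan is only good if *all* models find it expert-like (low epistemic risk);
    replacing the aggregator with `np.mean` recovers a non-robust average score."""
    best, best_score = None, -np.inf
    for traj in trajectory_library:            # e.g. 128 K-means centroids of expert plans
        scores = np.array([model(traj, context) for model in ensemble])
        agg = aggregator(scores)
        if agg > best_score:
            best, best_score = traj, agg
    return best
```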
"End-to-end Interpretable Neural Motion Planner"
- [2019] [paper] [University of Toronto] [Uber] - [interpretability, trajectory sampling]
Click to expand
The visualization of 3D detection, motion forecasting, as well as the learned cost-map volume offers interpretability. A set of candidate trajectories is sampled, first considering the geometrical path and then the speed profile. The trajectory with the minimum learned cost is selected.
Authors: Zeng W., Luo W., Suo S., Sadat A., Yang B., Casas S. & Urtasun R.
- Motivation: to bridge the gap between the traditional engineering stack and the end-to-end driving frameworks.
  - 1- Develop a learnable motion planner, avoiding the costly parameter tuning.
  - 2- Ensure interpretability in the motion decision. This is done by offering an intermediate representation.
  - 3- Handle uncertainty. This is allegedly achieved by using a learnt, non-parametric cost function.
  - 4- Handle multi-modality in possible trajectories (e.g. changing lane vs keeping lane).
- One quote about RL and IRL:
  - "It is unclear if RL and IRL can scale to more realistic settings. Furthermore, these methods do not produce interpretable representations, which are desirable in safety critical applications".
- Architecture:
  - Input: raw LIDAR data and an HD map.
  - 1st intermediate result: an "interpretable" bird's-eye view representation that includes:
    - 3D detections.
    - Predictions of future trajectories (planning horizon of 3 seconds).
    - Some spatio-temporal cost volume defining the goodness of each position that the self-driving car can take within the planning horizon.
  - 2nd intermediate result: a set of diverse, physically possible trajectories (candidates).
    - They are sampled as Clothoid curves: first building the geometrical path, then the speed profile on it.
    - "Note that Clothoid curves can not handle circle and straight line trajectories well, thus we sample them separately."
  - Final output: the trajectory with the minimum learned cost.
- Multi-objective:
  - 1- Perception loss - to predict the position of vehicles at every time frame.
    - Classification: distinguish a vehicle from the background.
    - Regression: generate precise object bounding boxes.
  - 2- Planning loss.
    - "Learning a reasonable cost volume is challenging as we do not have ground-truth. To overcome this difficulty, we minimize the max-margin loss where we use the ground-truth trajectory as a positive example, and randomly sampled trajectories as negative examples."
    - As stated, the intuition behind it is to encourage the demonstrated trajectory to have the minimal cost, and others to have higher costs (a toy sketch is given below).
    - The model hence learns a cost volume that discriminates good trajectories from bad ones.
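A toy sketch of such a max-margin planning loss (my own simplification: the cost of a trajectory is the sum of the learned cost volume along its cells, and the margin is a constant here):

```python
# Hedged sketch of a max-margin planning loss over a learned cost volume.
# cost_volume: (T, H, W) tensor; a trajectory is a list of (t, row, col) indices.
import torch

def trajectory_cost(cost_volume: torch.Tensor, traj) -> torch.Tensor:
    return torch.stack([cost_volume[t, r, c] for (t, r, c) in traj]).sum()

def max_margin_planning_loss(cost_volume: torch.Tensor,
                             expert_traj,
                             negative_trajs,
                             margin: float = 1.0) -> torch.Tensor:
    """Push the expert trajectory to have a lower cost than every sampled negative,
    by at least `margin` (hinge loss, keeping the worst violator)."""
    c_expert = trajectory_cost(cost_volume, expert_traj)
    violations = [torch.relu(c_expert - trajectory_cost(cost_volume, neg) + margin)
                  for neg in negative_trajs]
    return torch.stack(violations).max()
```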
"Learning from Interventions using Hierarchical Policies for Safe Learning"
- [2019] [paper] [University of Rochester, University of California San Diego] - [hierarchical, sampling efficiency, safe imitation learning]
Click to expand
The main idea is to use Learning from Interventions (LfI) in order to ensure safety and improve data efficiency, by intervening on sub-goals rather than trajectories. Both the top-level policy (which generates sub-goals) and the bottom-level policy are jointly learnt.
Authors: Bi, J., Dhiman, V., Xiao, T., & Xu, C.
- Motivations:
  - 1- Improve data-efficiency.
  - 2- Ensure safety.
- One term: "Learning from Interventions" (
LfI
).- One way to classify the "learning from expert" techniques is to use the frequency of expertβs engagement.
High frequency
-> Learning from Demonstrations.Medium frequency
-> learning from Interventions.Low frequency
-> Learning from Evaluations.
- Ideas of
LfI
:-
"When an undesired state is detected, another policy is activated to take over actions from the agent when necessary."
- Hence the expert overseer only intervenes when it suspects that an unsafe action is about to be taken.
-
- Two issues:
1-
LfI
(as forLfD
) learn reactive behaviours.-
"Learning a supervised policy is known to have 'myopic' strategies, since it ignores the temporal dependence between consecutive states".
- Maybe one option could be to stack frames or to include the current speed in the
state
. But that makes the state space larger.
-
2-
The expert only signals after a non-negligible amount of delay.
- One way to classify the "learning from expert" techniques is to use the frequency of expertβs engagement.
- One idea to solve both issues: hierarchy.
  - The idea is to split the policy into two hierarchical levels, one that generates sub-goals for the future and another that generates actions to reach those desired sub-goals.
  - The motivation is to intervene on sub-goals rather than trajectories.
  - One important parameter: k.
    - The top-level policy predicts a sub-goal to be achieved k steps ahead in the future.
    - It represents a trade-off between:
      - The ability for the top-level policy to predict sub-goals far into the future.
      - The ability for the bottom-level policy to follow them correctly.
  - One question: how to deal with the absence of ground-truth sub-goals? (see the relabelling sketch below)
    - One solution is "Hindsight Experience Replay", i.e. consider an achieved goal as a desired goal for past observations.
    - The authors present additional interpolation techniques.
    - They also present a Triplet Network to train goal-embeddings (I did not understand everything).
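A small, hypothetical sketch of the hindsight idea applied to sub-goals (not the authors' implementation): the state actually reached k steps later is relabelled as the sub-goal the top-level policy should have produced.

```python
# Hedged sketch: hindsight relabelling of sub-goals for hierarchical imitation.
from typing import List, Tuple

def relabel_subgoals(states: List, k: int) -> List[Tuple[object, object]]:
    """Return (observation, hindsight_sub_goal) pairs for training the top-level policy:
    the state observed k steps later becomes the target sub-goal at time t."""
    return [(states[t], states[t + k]) for t in range(len(states) - k)]

# The bottom-level policy is then trained on (state, sub_goal) -> action tuples,
# where the sub-goal is the same relabelled state k steps ahead.
```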
"Urban Driving with Conditional Imitation Learning"
Click to expand
The encoder is trained to reconstruct RGB, depth and segmentation, i.e. to learn scene understanding. It is augmented with optical flow for temporal information. As noted, such representations could be learned simultaneously with the driving policy, for example through distillation. But for efficiency, this was pre-trained (humans typically also have ~30 hours of driver training before taking the driving exam, but they start with huge prior knowledge). Interesting idea: the navigation command is injected at multiple locations of the control part.
Driving data is inherently heavily imbalanced: most of the captured data is driving near-straight in the middle of a lane. Any naive training will collapse to the dominant mode present in the data. No data augmentation is performed. Instead, during training, the authors sample data uniformly across the lateral and longitudinal control dimensions.
Authors: Hawke, J., Shen, R., Gurau, C., Sharma, S., Reda, D., Nikolov, N., Mazur, P., Micklethwaite, S., Griffiths, N., Shah, A. & Kendall, A.
- Motivations:
  - 1- Learn both steering and speed via behavioural cloning.
  - 2- Use raw sensor (camera) inputs, rather than intermediate representations.
  - 3- Train and test in dense urban environments.
- Why "conditional"?
- A route command (e.g.
turn left
,go straight
) resolves the ambiguity of multi-modal behaviours (e.g. when coming at an intersection). -
"We found that inputting the command multiple times at different stages of the network improves robustness of the model".
- A route command (e.g.
- Some ideas:
  - Provide wider state observability through multiple camera views (a single camera disobeys navigation interventions).
  - Add temporal information via optical flow.
    - Another option would be to stack frames. But it did not work well.
  - Train the primary shared encoders and auxiliary independent decoders for a number of computer vision tasks.
- "In robotics, the test data is the real world, not a static dataset as is typical in most ML problems. Every time our cars go out, the world is new and unique."
- One concept: "Causal confusion".
- A good video about Causal Confusion in Imitation Learning showing that "access to more information leads to worse generalisation under distribution shift".
-
"Spurious correlations cannot be distinguished from true causes in the demonstrations. [...] For example, inputting the current speed to the policy causes it to learn a trivial identity mapping, making the car unable to start from a static position."
- Two ideas during training:
- Using flow features to make the model use explicit motion information without learning the trivial solution of an identity mapping for speed and steering.
- Add random noise and use dropout on it.
- One alternative is to explicitly maintain a causal model.
- Another alternative is to learn to predict the speed, as detailed in "Exploring the Limitations of Behavior Cloning for Autonomous Driving".
- Output:
  - The model decides on a "motion plan", i.e. not directly the low-level control?
  - Concretely, the network gives one prediction and one slope, for both speed and steering, leading to two parameterised lines.
- Two types of tests:
  - 1- Closed-loop (i.e. go outside and drive).
    - Metric: the number and type of safety-driver interventions.
  - 2- Open-loop (i.e. evaluating on an offline dataset).
    - Metric: the weighted mean absolute error for speed and steering.
      - As noted, this can serve as a proxy for real-world performance.
  - "As discussed by [34] and [35], the correlation between offline open-loop metrics and online closed-loop performance is weak."
- About the training data:
  - As stated, there are two levers to increase the performance:
    - 1- Algorithmic innovation.
    - 2- Data.
  - For this IL approach, 30 hours of demonstrations were used.
  - "Removing a quarter of the data notably degrades performance, and models trained with less data are almost undriveable."
- Next steps:
  - I find the results already impressive. But as noted:
    - "The learned driving policies presented here need significant further work to be comparable to human driving".
  - Ideas for improvements include:
    - Add some predictive long-term planning model. At the moment, it does not have access to long-term dependencies and cannot reason about the road scene.
    - Learn not only from demonstrations, but also from mistakes.
      - This reminds me of the concept of ChauffeurNet: "simulate the bad rather than just imitate the good".
    - Continuous learning: learning from corrective interventions would also be beneficial.
  - The last point goes in the direction of adding learning signals, which was already done here:
    - Imitation of human expert drivers (supervised learning).
    - Safety-driver intervention data (negative reinforcement learning) and corrective action (supervised learning).
    - Geometry, dynamics, motion and future prediction (self-supervised learning).
    - Labelled semantic computer vision data (supervised learning).
    - Simulation (supervised learning).
"Application of Imitation Learning to Modeling Driver Behavior in Generalized Environments"
Click to expand
The IL models were trained on a straight road and tested on roads with high curvature. PS-GAIL is effective only while surrounded by other vehicles, while the RAIL policy remained stably within the bounds of the road thanks to the additional reward terms included in the learning process.
Authors: Lange, B. A., & Brannon, W. D.
- One motivation: compare the robustness (domain adaptation) of three IL techniques: GAIL, PS-GAIL and RAIL (described below).
IL
algorithms and why these algorithms came out.- Imitation Learning (
IL
) aims at building an (efficient) policy using some expert demonstrations. - Behavioural Cloning (
BC
) is a sub-class ofIL
. It treatsIL
as a supervised learning problem: a regression model is fit to thestate
/action
space given by the expert.-
Issue of distribution shift: "Because data is not infinite nor likely to contain information about all possible
state
/action
pairs in a continuousstate
/action
space,BC
can display undesirable effects when placed in these unknown or not well-known states." -
"A cascading effect is observed as the time horizon grows and errors expand upon each other."
-
- Several solutions (not exhaustive):
1-
DAgger
: Ask the expert to say what should be done in some encountered situations. Thus iteratively enriching the demonstration dataset.2-
IRL
: Human driving behaviour is not modelled inside a policy, but rather capture into a reward/cost function.- Based on this reward function, an (optimal) policy can be derived with classic
RL
techniques. - One issue: It can be computationally expensive.
- Based on this reward function, an (optimal) policy can be derived with classic
3-
GAIL
(I still need to read more about it):-
"It fits distributions of states and actions given by an expert dataset, and a cost function is learned via Maximum Causal Entropy
IRL
." -
"When the
GAIL
-policy driven vehicle was placed in a multi-agent setting, in which multiple agents take over the learned policy, this algorithm produced undesirable results among the agents."
-
PS-GAIL
is therefore introduced for multi-agent driving models (agents share a single policy learnt withPS-TRPO
).-
"Though
PS-GAIL
yielded better results in multi-agent simulations thanGAIL
, its results still led to undesirable driving characteristics, including unwanted trajectory deviation and off-road duration."
-
RAIL
offers a fix for that: the policy-learning process is augmented with two types of reward terms:- Binary penalties: e.g. collision and hard braking.
- Smoothed penalties: "applied in advance of undesirable actions with the theory that this would prevent these actions from occurring".
- I see that technique as a way to incorporate knowledge.
- Imitation Learning (
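A toy sketch of such reward augmentation (my own guess at the shape of the terms; all thresholds and weights are arbitrary): a binary penalty fires once the event occurs, while a smoothed penalty ramps up as the agent approaches it.

```python
# Hedged sketch: RAIL-style reward augmentation with binary and smoothed penalty terms.
# `info` is a hypothetical dict of simulator signals; all constants are illustrative.
def augmented_reward(base_reward: float, info: dict) -> float:
    reward = base_reward

    # Binary penalties: applied only once the undesirable event actually happens.
    if info["collided"]:
        reward -= 100.0
    if info["acceleration"] < -4.0:          # hard braking, in m/s^2
        reward -= 10.0

    # Smoothed penalty: grows as the car approaches the road edge, *before* going off-road,
    # so the learner is discouraged early instead of only being punished after the fact.
    margin = max(info["distance_to_road_edge"], 0.0)   # in metres
    reward -= 5.0 * max(0.0, 1.0 - margin / 2.0)       # linear ramp within 2 m of the edge
    return reward
```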
- About the experiment:
- The three policies were originally trained on the straight roadway: cars only consider the lateral distance to the edge.
- In the "new" environment, a road curvature is introduced.
- Findings:
  - "None of them were able to fully accommodate the turn in the road."
  - `PS-GAIL` is effective only while surrounded by other vehicles.
  - The smoothed reward augmentation helped `RAIL`, but it came too late to avoid going off-road (the car is already driving too fast and does not dare a `hard brake`, which is strongly penalized).
  - The reward function should therefore be updated (back to reward engineering), for instance by adding a harder reward term to prevent the car from leaving the road.
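As promised above, here is a minimal, self-contained sketch of the `DAgger` loop on a toy 1-D lane-keeping problem. Everything in it (dynamics, expert gain, number of iterations) is made up for illustration; it only shows the aggregate-and-refit mechanism.

```python
# Toy DAgger loop: roll out the learner, query the expert on the visited states,
# aggregate the new labels, refit. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def expert_action(offset):          # hypothetical expert: steer back proportionally
    return -0.8 * offset

def rollout(policy_gain, horizon=50):
    """Roll out the current learner policy and record the visited states."""
    offset, visited = 2.0, []
    for _ in range(horizon):
        visited.append(offset)
        action = policy_gain * offset            # learner acts (on-policy states!)
        offset = offset + action + 0.05 * rng.normal()
    return np.array(visited)

# D0: initial demonstrations, collected on states visited by the expert itself
states = np.linspace(-2, 2, 20)
actions = expert_action(states)

policy_gain = 0.0                                # dumb initial learner
for it in range(5):
    # 1- fit the learner on the aggregated dataset (here: 1-D least squares)
    policy_gain = np.dot(states, actions) / np.dot(states, states)
    # 2- roll out the LEARNER, 3- ask the EXPERT what it would have done there
    new_states = rollout(policy_gain)
    new_actions = expert_action(new_states)
    # 4- aggregate and repeat
    states = np.concatenate([states, new_states])
    actions = np.concatenate([actions, new_actions])
    print(f"iter {it}: learnt gain = {policy_gain:.3f}")
```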
"Learning by Cheating"
-
[
2019
] [π] [] [ π
UT Austin
] [ πIntel Labs
] -
[
on-policy supervision
,DAgger
,conditional IL
,mid-to-mid
,CARLA
]
Click to expand
The main idea is to decompose the imitation learning (IL ) process into two stages: 1- Learn to act. 2- Learn to see. Source. |
mid-to-mid learning: Based on a processed birdβs-eye view map , the privileged agent predicts a sequence of waypoints to follow. This desired trajectory is eventually converted into low-level commands by two PID controllers. It is also worth noting how this privileged agent serves as an oracle that provides adaptive on-demand supervision to train the sensorimotor agent across all possible commands. Source. |
Example of privileged map supplied to the first agent. And details about the lateral PID controller that produces steering commands based on a list of target waypoints. Source. |
Authors: Chen, D., Zhou, B., Koltun, V. & KrΓ€henbΓΌhl, P
- One motivation: decomposing the imitation learning (
IL
) process into two stages:Direct IL
(from expert trajectories to vision-based driving) conflates two difficult tasks:1-
Learning to see.2-
Learning to act.
- One term: "Cheating".
1-
First, train an agent that has access to privileged information:-
"This privileged agent cheats by observing the ground-truth layout of the environment and the positions of all traffic participants."
- Goal: The agent can focus on learning to act (it does not need to learn to see because it gets direct access to the environmentβs state).
-
2-
Then, this privileged agent serves as a teacher to train a purely vision-based system (abundant supervision
).- Goal: Learning to see.
1-
First agent (privileged
agent):- Input: A processed
birdβs-eye view map
(with ground-truth information about lanes, traffic lights, vehicles and pedestrians) together with high-levelnavigation command
andcurrent speed
. - Output: A list of waypoints the vehicle should travel to.
- Hence
mid-to-mid
learning approach. - Goal: imitate the expert trajectories.
- Training: Behaviour cloning (
BC
) from a set of recorded expert driving trajectories.- Augmentation can be done offline, to facilitate generalization.
- The agent is thus placed in a variety of perturbed configurations to learn how to recover
- E.g. facing the sidewalk or placed on the opposite lane, it should find its way back onto the road.
- Input: A processed
2-
Second agent (sensorimotor
agent):- Input: Monocular
RGB
image, currentspeed
, and a high-levelnavigation command
. - Output: A list of waypoints.
- Goal: Imitate the privileged agent.
- Input: Monocular
- One idea: "White-box" agent:
- The internal state of the
privileged
agent can be examined at will.- Based on that, one could test different high-level commands: "What would you do now if the command was [
follow-lane
] [go left
] [go right
] [go straight
]".
- Based on that, one could test different high-level commands: "What would you do now if the command was [
- This relates to
conditional IL
: all conditional branches are supervised during training.
- The internal state of the
- Another idea: "online learning" and "on-policy supervision":
-
"βOn-policyβ refers to the sensorimotor agent rolling out its own policy during training."
- Here, the decisions of the second agent are directly implemented (`closed-loop`).
on-policy
), which also accelerates the training. - This is an advantage of using a simulator: it would be difficult/impossible in the physical world.
- Here, the decision of the second agents are directly implemented (
- Here, the second agent is first trained
off-policy
(on expert demonstration) to speed up the learning (offline BC
), and only then goon-policy
:-
"Finally, we train the sensorimotor agent
on-policy
, using the privileged agent as an oracle that provides adaptive on-demand supervision in any state reached by the sensorimotor student." - The
sensorimotor
agent can thus be supervised on all its waypoints and across all commands at once.
-
- It resembles the Dataset aggregation technique of
DAgger
:-
"This enables automatic
DAgger
-like training in which supervision from the privileged agent is gathered adaptively via online rollouts of the sensorimotor agent."
-
-
- About the two benchmarks:
- Another idea: Do not directly output low-level commands.
  - Instead, predict waypoints and speed targets.
  - And rely on two `PID` controllers to implement them (a rough sketch of the longitudinal one follows below):
    - `1-` "We fit a parametrized circular arc to all waypoints using least-squares fitting and then steer towards a point on the arc."
    - `2-` "A longitudinal `PID` controller tries to match a target velocity as closely as possible [...] We ignore negative throttle commands, and only brake if the predicted velocity is below some threshold (`2 km/h`)."
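Below is a rough sketch of such a longitudinal `PID` controller. Only the "ignore negative throttle / brake when the target velocity is below ~2 km/h" behaviour comes from the quote above; the gains and the exact clipping are my assumptions.

```python
# Hedged sketch of a longitudinal PID with the thresholded braking rule quoted above.
class LongitudinalPID:
    def __init__(self, kp=0.5, ki=0.1, kd=0.05, dt=0.05, brake_threshold_kmh=2.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.brake_threshold = brake_threshold_kmh / 3.6   # convert to m/s
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, target_speed, current_speed):
        """Return (throttle, brake) in [0, 1] from speeds in m/s."""
        error = target_speed - current_speed
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative

        throttle = max(0.0, min(1.0, u))        # negative commands are ignored
        # brake only if the target/predicted velocity is below the threshold
        brake = 1.0 if target_speed < self.brake_threshold else 0.0
        return throttle, brake

# usage: controller = LongitudinalPID(); throttle, brake = controller.step(5.0, 3.2)
```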
"Deep Imitative Models for Flexible Inference, Planning, and Control"
-
[
2019
] [π] [ποΈ] [ πCarnegie Mellon University, UC Berkeley
] -
[
conditional IL
,model-based RL
,CARLA
]
Click to expand
The main motivation is to combine the benefits of IL (to imitate some expert demonstrations) and goal-directed planning (e.g. model-based RL ). Source. |
φ represents the scene. It consists of the current lidar scan, previous states in the trajectory as well as the current traffic light state. Source. |
From left to right: Point , Line-Segment and Region (small and wide) Final State Indicators used for planning. Source. |
Comparison of features and implementations. Source. |
Authors: Rhinehart, N., McAllister, R., & Levine, S.
-
Main motivation: combine the benefits of
imitation learning
(IL
) andgoal-directed planning
such asmodel-based RL
(MBRL
).- Especially to generate interpretable, expert-like plans with offline learning and no reward engineering.
- Neither
IL
norMBRL
can do so. - In other words, it completes planning based on some imitation prior.
-
One concept: "Imitative Models"
- They are "probabilistic predictive models able to plan interpretable expert-like trajectories to achieve new goals".
- As for
IL
-> use expert demonstration:- It generates expert-like behaviors without reward function crafting.
- The model being learnt "offline" also means it avoids costly online data collection (contrary to
MBRL
). - It learns the dynamics of desirable behaviour (as opposed to learning the dynamics of possible behaviour, as done by
MBRL
).
- As for
MBRL
-> use planning:- It achieves new goals (goals that were not seen during training). Therefore, it avoids the theoretical drift shortcomings (distribution shift) of vanilla behavioural cloning (
BC
). - It outputs (interpretable)
plan
to them at test-time, whichIL
cannot. - It does not need goal labels for training.
- It achieves new goals (goals that were not seen during training). Therefore, it avoids the theoretical drift shortcomings (distribution shift) of vanilla behavioural cloning (
- Binding
IL
andplanning
:- The learnt
imitative model
q(S|Ο)
can generate trajectories that resemble those that the expert might generate.- These manoeuvres do not have a specific goal. How to direct our agent to goals?
- General tasks are defined by a set of goal variables
G
.- At test time, a route planner provides waypoints to the imitative planner, which computes expert-like paths for each candidate waypoint.
- The best plan is chosen according to the planning objective (e.g. prefer routes avoiding potholes) and provided to a low-level
PID
-controller in order to producesteering
andthrottle
actions. - In other words, the derived plan (list of set-points) should be:
- Maximizing the similarity to the expert demonstrations (term with
q
) - Maximizing the probability of reaching some general goals (term with
P(G)
).
- Maximizing the similarity to the expert demonstrations (term with
- How to represent goals?
dim=0
- with points:Final-State Indicator
.dim=1
- with lines:Line-Segment Final-State Indicator
.dim=2
- with areas (regions):Final-State Region Indicator
.
- The learnt
-
How to deal with traffic lights?
- The concept of
smart waypointer
is introduced. -
"It removes far waypoints beyond
5
meters from the vehicle when a red light is observed in the measurements provided by CARLA". -
"The planner prefers closer goals when obstructed, when the vehicle was already stopped, and when a red light was detected [...] The planner prefers farther goals when unobstructed and when green lights or no lights were observed."
- The concept of
-
About interpretability and safety:
-
"In contrast to black-box one-step
IL
that predicts controls, our method produces interpretable multi-step plans accompanied by two scores. One estimates the planβsexpertness
, the second estimates its probability to achieve the goal."- The
imitative model
can produce some expert probability distribution function (PDF
), hence offering superior interpretability to one-stepIL
models. - It is able to score how likely a trajectory is to come from the expert.
- The probability to achieve a goal is based on some "Goal Indicator methods" (using "Goal Likelihoods"). I must say I did not fully understand that part
- The
- The safety aspect relies on the fact that experts were driving safely and is formalized as a "plan reliability estimation":
-
"Besides using our model to make a best-effort attempt to reach a user-specified goal, the fact that our model produces explicit likelihoods can also be leveraged to test the reliability of a plan by evaluating whether reaching particular waypoints will result in human-like behavior or not."
- Based on this idea, a classification is performed to recognize safe and unsafe plans, based on the planning criterion.
-
-
-
About the baselines:
- Obviously, the proposed approach is compared to the two methods it aims at combining.
- About
MBRL
:1-
First, aforward dynamics model
is learnt using given observed expert data.- It does not imitate the expert preferred actions, but only models what is physically possible.
2-
The model then is used to plan a reachability tree through the free-space up to the waypoint while avoiding obstacles:- Playing with the
throttle
action, the search expands each state node and retains the50
closest nodes to the target waypoint. - The planner finally opts for the lowest-cost path that ends near the goal.
- Playing with the
-
"The task of evoking expert-like behavior is offloaded to the reward function, which can be difficult and time-consuming to craft properly."
- About
IL
: It used Conditional terms on States, leading toCILS
.S
forstate
: Instead of emitting low-level control commands (throttle
,steering
), it outputs set-points for somePID
-controller.C
forconditional
: To navigate at intersections, waypoints are classified into one of several directives: {Turn left
,Turn right
,Follow Lane
,Go Straight
}.- This is inspired by "End-to-end driving via conditional imitation learning" - (Codevilla et al. 2018) - detailed below.
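As a side note, the planning objective discussed above (maximize the similarity to the expert demonstrations, term with `q`, while maximizing the probability of reaching the goal, term with `P(G)`) can be paraphrased in my notation as the following math sketch:

```latex
% Paraphrased imitative planning objective: pick the plan that is both
% expert-like (imitation prior q) and likely to reach the goal set G.
\mathbf{s}^{*} \;=\; \arg\max_{\mathbf{s}} \;
  \underbrace{\log q(\mathbf{s} \mid \phi)}_{\text{imitation prior}}
  \;+\;
  \underbrace{\log p(\mathcal{G} \mid \mathbf{s})}_{\text{goal likelihood}}
```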
"Conditional Vehicle Trajectories Prediction in CARLA Urban Environment"
Click to expand
Some figures:
End-to-Mid approach: 3 inputs with different levels of abstraction are used to predict the future positions on a fixed 2s-horizon of the ego vehicle and the neighbours. The ego trajectory is then implemented by an external PID controller - therefore, not end-to-end. Source. |
The past 3D-bounding boxes of the road users in the current reference are projected back in the current camera space. The past positions of ego and other vehicles are projected into some grid-map called proximity map . The image and the proximity map are concatenated to form context feature vector C . This context encoding is concatenated with the ego encoding, then fed into branches corresponding to the different high-level goals - conditional navigation goal . Source. |
Illustration of the distribution shift in imitation learning. Source. |
VisualBackProp highlights the image pixels which contributed the most to the final results - Traffic lights and their colours are important, together with highlights lane markings and curbs when there is a significant lateral deviation. Source. |
Authors: Buhet, T., Wirbel, E., & Perrotton, X.
- Previous works:
- "Imitation Learning for End to End Vehicle Longitudinal Control with Forward Camera" - (George, Buhet, Wirbel, Le-Gall, & Perrotton, 2018).
- "End to End Vehicle Lateral Control Using a Single Fisheye Camera" (Toromanoff, M., Wirbel, E., Wilhelm, F., Vejarano, C., Perrotton, X., & Moutarde, F. 2018).
- One term: "End-To-Middle".
- It is opposed to "End-To-End", i.e. it does not output "end" control signals such as
throttle
orsteering
but rather some desired trajectory, i.e. a mid-level representation.- Each trajectory is described by two polynomial functions (one for
x
, the other fory
), therefore the network has to predict a vector (x0
, ...,x4
,y0
, ...,y4
) for each vehicle. - The desired ego-trajectory is then implemented by an external controller (
PID
). Therefore, notend-to-end
.
- Each trajectory is described by two polynomial functions (one for
- Advantages of
end-to-mid
: interpretability for the control part + less to be learnt by the net. - This approach is also an instance of "Direct perception":
-
"Instead of commands, the network predicts hand-picked parameters relevant to the driving (distance to the lines, to other vehicles), which are then fed to an independent controller".
-
- Small digression: if the raw perception measurements were first processed to form a mid-level input representation, the approach would be said
mid-to-mid
. An example is ChauffeurNet, detailed on this page as well.
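To make the `end-to-mid` output format above concrete, here is a tiny numerical illustration of the two degree-4 polynomials (`x0`, ..., `x4`, `y0`, ..., `y4`) that describe each predicted trajectory over the 2s horizon. The numbers are made up.

```python
# Each predicted trajectory = two degree-4 polynomials of time, so the network
# only has to output 10 coefficients. Values below are illustrative.
import numpy as np

t = np.linspace(0.0, 2.0, 11)                 # fixed 2s horizon, 11 samples
x = 8.0 * t + 0.3 * t**2                      # some plausible ego motion (metres)
y = 0.1 * t**2 - 0.05 * t**3

x_coeffs = np.polyfit(t, x, deg=4)            # numpy returns x4..x0 (high -> low)
y_coeffs = np.polyfit(t, y, deg=4)
print("network target vector:", np.concatenate([x_coeffs[::-1], y_coeffs[::-1]]))

# at inference, the mid-level output is turned back into waypoints for the controller
x_pred = np.polyval(x_coeffs, t)
y_pred = np.polyval(y_coeffs, t)
```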
- About Ground truth:
- The expert demonstrations do not come from human recordings but rather from
CARLA
autopilot. 15
hours of driving inTown01
were collected.- As for human demonstrations, no annotation is needed.
- The expert demonstrations do not come from human recordings but rather from
- One term: "Conditional navigation goal".
- Together with the RGB images and the past positions, the network takes as input a navigation command to describe the desired behaviour of the ego vehicle at intersections.
- Hence, the future trajectory of the ego vehicle is conditioned by a navigation command.
- If the ego-car is approaching an intersection, the goal can be
left
,right
orcross
, else the goal is tokeep lane
. - That means
lane-change
is not an option.
- If the ego-car is approaching an intersection, the goal can be
-
"The last layers of the network are split into branches which are masked with the current navigation command, thus allowing the network to learn specific behaviours for each goal".
- Three ingredients to improve vanilla end-to-end imitation learning (
IL
):1
- Mix ofhigh
andlow
-level input (i.e. hybrid input):- Both raw signal (images) and partial environment abstraction (navigation commands) are used.
2
- Auxiliary tasks:- One head of the network predicts the future trajectories of the surrounding vehicles.
- It differs from the primary task which should decide the
2s
-ahead trajectory for the ego car. - Nevertheless, this secondary task helps: "Adding the neighbours prediction makes the ego prediction more compliant to traffic rules."
- It differs from the primary task which should decide the
- This refers to the concept of "Privileged learning":
-
"The network is partly trained with an auxiliary task on a ground truth which is useful to driving, and on the rest is only trained for IL".
-
- One head of the network predicts the future trajectories of the surrounding vehicles.
3
- Label augmentation:- The main challenge of
IL
is the difference between train and online test distributions. This is due to the difference betweenOpen-loop
control: decisions are not implemented.Close-loop
control: decisions are implemented, and the vehicle can end in a state absent from the train distribution, potentially causing "error accumulation".
- Data augmentation is used to reduce the gap between train and test distributions.
- Classical randomization is combined with
label augmentation
: data similar to failure cases is generated a posteriori.
- Classical randomization is combined with
- Three findings:
-
"There is a significant gap in performance when introducing the augmentation."
-
"The effect is much more noticeable on complex navigation tasks." (Errors accumulate quicker).
-
"Online test is the real significant indicator for
IL
when it is used for active control." (The common offline evaluation metrics may not be correlated to the online performance).
- The main challenge of
- Baselines:
- Conditional Imitation learning (
CIL
): "End-to-end driving via conditional imitation learning" video - (Codevilla, F., MΓΌller, M., LΓ³pez, A., Koltun, V., & Dosovitskiy, A. 2017).CIL
produces instantaneous commands.
- Conditional Affordance Learning (
CAL
): "Conditional affordance learning for driving in urban environments" video - (Sauer, A., Savinov, N. & Geiger, A. 2018).CAL
produces "affordances" which are then given to a controller.
- Conditional Imitation learning (
- One word about the choice of the simulator.
- A possible alternative to CARLA could be DeepDrive or the
LGSVL
simulator developed by the Advanced Platform Lab at the LG Electronics America R&D Centre. This looks promising.
- A possible alternative to CARLA could be DeepDrive or the
"Uncertainty Quantification with Statistical Guarantees in End-to-End Autonomous Driving Control"
-
[
2019
] [π] [ πOxford University
] -
[
uncertainty-aware decision
,Bayesian inference
,CARLA
]
Click to expand
One figure:
The trust or uncertainty in one decision can be measured based on the probability mass function around its mode. Source. |
The measures of uncertainty based on mutual information can be used to issue warnings to the driver and perform safety / emergency manoeuvres. Source. |
As noted by the authors: while the variance can be useful in collision avoidance, the wide variance of HMC causes a larger proportion of trajectories to fall outside of the safety boundary when a new weather is applied. Source. |
Authors: Michelmore, R., Wicker, M., Laurenti, L., Cardelli, L., Gal, Y., & Kwiatkowska, M
- One related work:
- NVIDIAβs
PilotNet
[DAVE-2
] where expert demonstrations are used together with supervised learning to map from images (front camera) to steering command. - Here, human demonstrations are collected in the CARLA simulator.
- NVIDIAβs
- One idea: use distribution in weights.
- The difference with
PilotNet
is that the neural network applies the "Bayesian" paradigm, i.e. each weight is described by a distribution (not just a single value). - The authors illustrate the benefits of that paradigm, imagining an obstacle in the middle of the road.
- The Bayesian controller may be uncertain on the steering angle to apply (e.g. a
2
-tail orM
-shape distribution). - A first option is to sample angles, which turns the car either right or left, with equal probability.
- Another option would be to simply select the mean value of the distribution, which aims straight at the obstacle.
- The motivation of this work is based on that example: "derive some precise quantitative measures of the BNN uncertainty to facilitate the detection of such ambiguous situation".
- The Bayesian controller may be uncertain on the steering angle to apply (e.g. a
- The difference with
- One definition: "real-time decision confidence".
- This is the probability that the BNN controller is certain of its decision at the current time.
- The notion of trust can therefore be introduced: the idea is to compute the probability mass in an `ε`-ball around the decision `π`(`observation`) and to classify the decision as certain if the resulting probability is greater than a threshold (a small numerical sketch is given below).
  - It reminds me of the concept of `trust-region` optimisation in RL.
  - In extreme cases, all actions are equally distributed, `π`(`observation`) has a very high variance, the agent does not know what to do (no trust) and will randomly sample an action.
- How to get these estimates? Three Bayesian inference methods are compared:
- Monte Carlo dropout (
MCD
). - Mean-field variational inference (
VI
). - Hamiltonian Monte Carlo (
HMC
).
- What to do with this information?
-
"This measure of uncertainty can be employed together with commonly employed measures of uncertainty, such as mutual information, to quantify in real time the degree that the model is confident in its predictions and can offer a notion of trust in its predictions."
- I did not know about "mutual information" and liked the explanation of Wikipedia about the link of
MI
toentropy
andKL-div
.- I am a little bit confused: in what I read, the
MI
is a function of two random variables. What are they here? The authors rather speak about the uncertainty exhibited by the predictive distribution.
- Depending on the uncertainty level, several actions are taken:
mutual information warnings
slow down the vehicle.standard warnings
slow down the vehicle and alert the operator of potential hazard.severe warnings
cause the car to safely brake and ask the operator to take control back.
-
- Another definition: "probabilistic safety", i.e. the probability that a BNN controller will keep the car "safe".
- Nice, but what is "safe"?
- It all relies on the assumption that expert demonstrations were all "safe", and measures how much of the trajectory belongs to this "safe set".
- I must admit I did not fully understand the measure on "safety" for some continuous trajectory and discrete demonstration set:
- A car can drive with a large lateral offset from the demonstration on a wide road while being "safe", while a thin lateral shift in a narrow street can lead to an "unsafe" situation.
- Not to mention that the scenario (e.g. configuration of obstacles) has probably changed in-between.
- This leads to the following point with an interesting application for scenario coverage.
- One idea: apply changes in scenery and weather conditions to evaluate model robustness.
- To check the generalization ability of a model, the safety analysis is re-run (offline) with other weather conditions.
- As noted in conclusion, this offline safety probability can be used as a guide for active learning in order to increase data coverage and scenario representation in training data.
"Exploring the Limitations of Behavior Cloning for Autonomous Driving"
-
[
2019
] [π] [ποΈ] [] [ π
CVC, UAB, Barcelona
] [ πToyota
] -
[
distributional shift problem
,off-policy data collection
,CARLA
,conditional imitation learning
,residual architecture
,reproducibility issue
,variance caused by initialization and sampling
]
Click to expand
One figure:
Conditional Imitation Learning is extended with a ResNet architecture and Speed prediction (CILRS ). Source. |
Authors: Codevilla, F., Santana, E., Antonio, M. L., & Gaidon, A.
- One term: βCILRSβ = Conditional Imitation Learning extended with a ResNet architecture and Speed prediction.
- One Q&A: How to include in E2E learning information about the destination, i.e. to disambiguate imitation around multiple types of intersections?
- Add a high-level
navigational command
(e.g. take the next right, left, or stay in lane) to the tuple <observation
,expert action
> when building the dataset.
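To illustrate how such a `navigational command` can condition the output, here is a toy branched-head sketch: one tiny output head per command, with the command acting as a switch at run time. Shapes and the feature extractor are placeholders, not the paper's architecture.

```python
# Conditional branching: the high-level command selects which output head is used.
import numpy as np

COMMANDS = ["follow_lane", "go_left", "go_right", "go_straight"]
rng = np.random.default_rng(0)
W = {c: rng.normal(size=(2, 64)) for c in COMMANDS}    # one tiny head per command

def forward(features, command):
    """features: (64,) image/speed embedding, command: one of COMMANDS."""
    head = W[command]                 # the command masks/selects the branch
    return head @ features            # e.g. (steer, throttle) or waypoint offsets

features = rng.normal(size=64)
print(forward(features, "go_left"))
```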
- One idea: learn to predict the ego speed (
mediated perception
) to address the inertia problem stemming from causal confusion (biased correlation between low speed and no acceleration - when the ego vehicle is stopped, e.g. at a red traffic light, the probability it stays static is indeed overwhelming in the training data).- A good video about Causal Confusion in Imitation Learning showing that "access to more information leads to worse generalisation under distribution shift".
- Another idea: The off-policy (expert) driving demonstration is not produced by a human, but rather generated from an omniscient "AI" agent.
- One quote:
"The more common the vehicle model and color, the better the trained agent reacts to it. This raises ethical challenges in automated driving".
"Conditional Affordance Learning for Driving in Urban Environments"
-
[
2018
] [π] [ποΈ] [ποΈ] [] [ π
CVC, UAB, Barcelona
] [ πToyota
] -
[
CARLA
,end-to-mid
,direct perception
]
Click to expand
Some figures:
Examples of affordances, i.e. attributes of the environment which limit the space of allowed actions. A1 , A2 and A3 are predefined observation areas. Source. |
The presented direct perception method predicts a low-dimensional intermediate representation of the environment - affordance - which is then used in a conventional control algorithm. The affordance is conditioned for goal-directed navigation, i.e. before each intersection, it receives an instruction such as go straight , turn left or turn right . Source. |
The feature maps produced by a CNN feature extractor are stored in a memory and consumed by task-specific layers (one affordance has one task block). Every task block has its own specific temporal receptive field - it decides how much of the memory it needs. This figure also illustrates how the navigation command is used as switch between trained submodules. Source. |
Authors: Sauer, A., Savinov, N., & Geiger, A.
- One term: "Direct perception" (
DP
):- The goal of
DP
methods is to predict a low-dimensional intermediate representation of the environment which is then used in a conventional control algorithm to manoeuvre the vehicle. - With this regard,
DP
could also be saidend-to-
mid
. The mapping to learn is less complex thanend-to-
end
(from raw input to controls). DP
is meant to combine the advantages of two other commonly-used approaches: modular pipelinesMP
andend-to-end
methods such as imitation learningIL
or model-freeRL
.- Ground truth affordances are collected using
CARLA
. Several augmentations are performed.
- The goal of
- Related work on affordance learning and direct perception.
Deepdriving
: Learning affordance for direct perception in autonomous driving by (Chen, Seff, Kornhauser, & Xiao, 2015).Deepdriving
works on highway.- Here, the idea is extended to urban scenarios (with traffic signs, traffic lights, junctions) considering a sequence of images (not just one camera frame) for temporal information.
- One term: "Conditional Affordance Learning" (
CAL
):- "Conditional": The actions of the agent are conditioned on a high-level command given by the navigation system (the planner) prior to intersections. It describes the manoeuvre to be performed, e.g.,
go straight
,turn left
,turn right
. - "Affordance": Affordances are one example of
DP
representation. They are attributes of the environment which limit the space of allowed actions. Only6
affordances are used forCARLA
urban driving:Distance to vehicle
(continuous).Relative angle
(continuous and conditional).Distance to centre-line
(continuous and conditional).Speed Sign
(discrete).Red Traffic Light
(discrete - binary).Hazard
(discrete - binary).- The
Class Weighted Cross Entropy
is the loss used for discrete affordances to put more weights on rare but important occurrences (hazard
occurs rarely compared totraffic light red
).
- The
- "Learning": A single neural network trained with multi-task learning (
MTL
) predicts all affordances in a single forward pass (~50ms
). It only takes a single front-facing camera view as input.
- "Conditional": The actions of the agent are conditioned on a high-level command given by the navigation system (the planner) prior to intersections. It describes the manoeuvre to be performed, e.g.,
- About the controllers: The path-velocity decomposition is applied. Hence two controllers are used in parallel:
- 1-
throttle
andbrake
- Based on the predicted affordances, a state is "rule-based" assigned among:
cruising
,following
,over limit
,red light
, andhazard stop
(all are mutually exclusive). - Based on this state, the longitudinal control signals are derived, using
PID
or threshold-predefined values. - It can handle traffic lights, speed signs and smooth car-following.
- Note: The Supplementary Material provides detailed insights on controller tuning (especially
PID
) forCARLA
.
- Based on the predicted affordances, a state is "rule-based" assigned among:
- 2-
steering
is controlled by a Stanley Controller, based on two conditional affordances:distance to centreline
andrelative angle
.
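For illustration, a hedged sketch of the rule-based state assignment described above. The thresholds and the ordering of the rules are my guesses; only the five mutually exclusive states and the affordances they depend on come from the summary.

```python
# Rule-based longitudinal state assignment from predicted affordances.
def longitudinal_state(dist_to_vehicle_m, speed_kmh, speed_limit_kmh,
                       red_light, hazard):
    if hazard:
        return "hazard_stop"
    if red_light:
        return "red_light"
    if speed_kmh > speed_limit_kmh:
        return "over_limit"
    if dist_to_vehicle_m < 35.0:          # arbitrary following distance
        return "following"
    return "cruising"

# the chosen state then selects the PID set-point / predefined throttle-brake values
print(longitudinal_state(dist_to_vehicle_m=20.0, speed_kmh=28.0,
                         speed_limit_kmh=30.0, red_light=False, hazard=False))
```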
- One idea: I am often wondering what timeout I should set when testing a scenario with
CARLA
. The author computes this time based on the length of the pre-defined path (which is actually easily accessible):-
"The time limit equals the time needed to reach the goal when driving along the optimal path at
10 km/h
"
-
- Another idea: Attention Analysis.
- For a better understanding of how affordances are constructed, the attention of the `CNN` is analysed using gradient-weighted class activation maps (`Grad-CAMs`).
- This "visual explanation" reminds me of another technique used in `end-to-end` approaches, `VisualBackProp`, which highlights the image pixels that contributed the most to the final results.
- Baselines and results:
- Compared to
CARLA
-based Modular Pipeline (MP
), Conditional Imitation Learning (CIL
) and Reinforcement Learning (RL
),CAL
particularly excels in generalizing to the new town.
- Compared to
- Where to provide the high-level navigation conditions?
- The authors find that "conditioning in the network has several advantages over conditioning in the controller".
- In addition, in the net, it is preferable to use the navigation command as switch between submodules rather than an input:
-
"We observed that training specialized submodules for each directional command leads to better performance compared to using the directional command as an additional input to the task networks".
-
"Variational Autoencoder for End-to-End Control of Autonomous Driving with Novelty Detection and Training De-biasing"
-
[
2018
] [π] [ποΈ] [ποΈ] [ πMIT
] [ πToyota
] -
[
VAE
,uncertainty estimation
,sampling efficiency
,augmentation
]
Click to expand
Some figures:
One particular latent variable ^y is explicitly supervised to predict steering control. Another interesting idea: augmentation is based on domain knowledge - if a method used to the middle-view is given some left-view image, it should predict some correction to the right. Source. |
For each new image, empirical uncertainty estimates are computed by sampling from the variables of the latent space. These estimates lead to the D statistic that indicates whether an observed image is well captured by our trained model, i.e. novelty detection . Source. |
In a subsequent work, the VAE is conditioned onto the road topology. It serves multiple purposes such as localization and end-to-end navigation. The routed or unrouted map given as additional input goes toward the mid-to-end approach where processing is performed and/or external knowledge is embedded. Source. See this video temporal for evolution of the predictions. |
Authors: Amini, A., Schwarting, W., Rosman, G., Araki, B., Karaman, S., & Rus, D.
- One issue raised about vanilla
E2E
:- The lack a measure of associated confidence in the prediction.
- The lack of interpretation of the learned features.
- Having said that, the authors present an approach to both understand and estimate the confidence of the output.
- The idea is to use a Variational Autoencoder (
VAE
), taking benefit of its intermediate latent representation which is learnt in an unsupervised way and provides uncertainty estimates for every variable in the latent space via their parameters.
- One idea for the
VAE
: one particular latent variable is explicitly supervised to predict steering control.
  - The loss function of the `VAE` therefore has `3` parts (written out in the sketch below):
    - A `reconstruction` loss: `L1`-norm between the input image and the output image.
    - A `latent` loss: `KL`-divergence between the latent variables and a unit Gaussian, providing regularization for the latent space.
    - A `supervised latent` loss: `MSE` between the predicted and actual curvature of the vehicle's path.
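Written out in my notation, the three terms listed above combine into one objective; the weighting factors `λi` are my assumption, the paper may balance the terms differently.

```latex
% Three-part VAE loss (lambda_i weights are assumptions):
\mathcal{L} \;=\;
  \lambda_{1}\,\underbrace{\lVert x - \hat{x} \rVert_{1}}_{\text{reconstruction}}
  \;+\;
  \lambda_{2}\,\underbrace{D_{KL}\big(q(z \mid x)\,\Vert\,\mathcal{N}(0, I)\big)}_{\text{latent regularization}}
  \;+\;
  \lambda_{3}\,\underbrace{\big(\hat{y}_{\text{curv}} - y_{\text{curv}}\big)^{2}}_{\text{supervised latent (MSE)}}
```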
- One contribution: "Detection of novel events" (which have not been sufficiently trained for).
- To check if an observed image is well captured by the trained model, the idea is to propagate the
VAE
βs latent uncertainty through the decoder and compare the result with the original input. This is done by sampling (empirical uncertainty estimates). - The resulting pixel-wise expectation and variance are used to compute a sort of loss metric
D
(x
,Λx
) whose distribution for the training-set is known (approximated with a histogram). - The image
x
is classified asnovel
if this statistic is outside of the95th
percentile of the training distribution and the prediction can finally be "untrusted to produce reliable outputs". -
"Our work presents an indicator to detect novel images that were not contained in the training distribution by weighting the reconstructed image by the latent uncertainty propagated through the network. High loss indicates that the model has not been trained on that type of image and thus reflects lower confidence in the networkβs ability to generalize to that scenario."
- To check if an observed image is well captured by the trained model, the idea is to propagate the
- A second contribution: "Automated debiasing against learned biases".
- As for the novelty detection, it takes advantage of the latent space distribution and the possibility of sampling from the most representative regions of this space.
- Briefly said, the idea it to increase the proportion of rarer datapoints by dropping over-represented regions of the latent space to accelerate the training (sampling efficiency).
- This debiasing is not manually specified beforehand but based on learned latent variables.
- One reason to use single frame prediction (as opposed to
RNN
):-
""Note that only a single image is used as input at every time instant. This follows from original observations where models that were trained end-to-end with a temporal information (
CNN
+LSTM
) are unable to decouple the underlying spatial information from the temporal control aspect. While these models perform well on test datasets, they face control feedback issues when placed on a physical vehicle and consistently drift off the road.""
-
- One idea about augmentation (also met in the Behavioral Cloning Project of the Udacity Self-Driving Car Engineer Nanodegree):
-
"To inject domain knowledge into our network we augmented the dataset with images collected from cameras placed approximately
2
feet to the left and right of the main centre camera. We correspondingly changed the supervised control value to teach the model how to recover from off-centre positions."
-
- One note about the output:
-
"We refer to steering command interchangeably as the road curvature: the actual steering angle requires reasoning about road slip and control plant parameters that change between vehicles."
-
- Previous and further works:
- "Spatial Uncertainty Sampling for End-to-End control" - (Amini, Soleimany, Karaman, & Rus, 2018)
- "Variational End-to-End Navigation and Localization" - (Amini, Rosman, Karaman, & Rus, 2019)
- One idea: incorporate some coarse-grained roadmaps with raw perceptual data.
- Either unrouted (just containing the drivable roads).
Output
= continuous probability distribution over steering control. - Or routed (target road highlighted).
Output
= deterministic steering control to navigate.
- Either unrouted (just containing the drivable roads).
- How to evaluate the continuous probability distribution over steering control given the human "scalar" demonstration?
-
"For a range of
z
-scores over the steering control distribution we compute the number of samples within the test set where the true (human) control output was within the predicted range."
-
- About the training dataset:
25 km
of urban driving data.
- One idea: incorporate some coarse-grained roadmaps with raw perceptual data.
"ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst"
-
[
imitation learning
,distributional shift problem
]
Click to expand
Two figures:
Different layers composing the mid-level representation . Source. |
Training architecture around ChauffeurNet with the different loss terms, that can be grouped into environment and imitation losses. Source. |
Authors: Bansal, M., Krizhevsky, A., & Ogale, A.
- One term: "mid-level representation"
- The decision-making task (between
perception
andcontrol
) is packed into one single "learnable" module.- Input: the representation divided into several image-like layers:
Map features
such as lanes, stop signs, cross-walks...;Traffic lights
;Speed Limit
;Intended route
;Current agent box
;Dynamic objects
;Past agent poses
.- Such a representation is generic, i.e. independent of the number of dynamic objects and independent of the road geometry/topology.
- I discuss some equivalent representations seen at IV19.
- Output: intended route, i.e. the future poses recurrently predicted by the introduced
ChauffeurNet
model.
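A small sketch of how such image-like layers can be stacked into one fixed-size input tensor. The resolution and the per-channel encoding are placeholders; only the list of layers comes from the summary above.

```python
# Stack the mid-level layers into a single top-down, multi-channel input tensor.
import numpy as np

H, W = 200, 200                       # top-down grid around the ego vehicle
layers = {
    "map_features":    np.zeros((H, W), dtype=np.float32),  # lanes, stop signs, ...
    "traffic_lights":  np.zeros((H, W), dtype=np.float32),
    "speed_limit":     np.zeros((H, W), dtype=np.float32),
    "intended_route":  np.zeros((H, W), dtype=np.float32),
    "ego_box":         np.zeros((H, W), dtype=np.float32),
    "dynamic_objects": np.zeros((H, W), dtype=np.float32),
    "past_ego_poses":  np.zeros((H, W), dtype=np.float32),
}
model_input = np.stack(list(layers.values()), axis=-1)      # (H, W, 7)
print(model_input.shape)  # independent of the number of objects and of road topology
```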
- This architecture lies between E2E (from
pixels
directly tocontrol
) and fully decomposed modular pipelines (decomposingplanning
in multiple modules). - Two notable advantages over E2E:
- It alleviates the burdens of learning perception and control:
- The desired trajectory is passed to a controls optimizer that takes care of creating the low-level control signals.
- Not to mention that different types of vehicles may possibly utilize different control outputs to achieve the same driving trajectory.
- Perturbations and input data from simulation are easier to generate.
- It alleviates the burdens of learning perception and control:
- The decision-making task (between
- One key finding: "pure imitation learning is not sufficient", despite the
60
days of continual driving (30 million
examples).- One quote about the "famous" distribution shift (deviation from the training distribution) in imitation learning:
"The key challenge is that we need to run the system closed-loop, where errors accumulate and induce a shift from the training distribution."
- The training data does not have any real collisions. How can the agent efficiently learn to avoid them if it has never been exposed during training?
- One solution consists in exposing the model to non-expert behaviours, such as collisions and off-road driving, and in adding extra loss functions.
- Going beyond vanilla cloning.
- Trajectory perturbation: Expose the learner to synthesized data in the form of perturbations to the expertβs driving (e.g. jitter the midpoint pose and heading)
- One idea for future works is to use more complex augmentations, e.g. with RL, especially for highly interactive scenarios.
- Past dropout: to prevent using the history to cheat by just extrapolating from the past rather than finding the underlying causes of the behaviour.
- Hence the concept of tweaking the training data in order to βsimulate the bad rather than just imitate the goodβ.
- Trajectory perturbation: Expose the learner to synthesized data in the form of perturbations to the expertβs driving (e.g. jitter the midpoint pose and heading)
- Going beyond the vanilla imitation loss.
- Extend imitation losses.
- Add environment losses to discourage undesirable behaviour, e.g. measuring the overlap of predicted agent positions with the non-road regions.
- Use imitation dropout, i.e. sometimes favour the environment loss over the imitation loss.
- Going beyond vanilla cloning.
"Imitating Driver Behavior with Generative Adversarial Networks"
-
[
2017
] [π] [] [ π
Stanford
] -
[
adversarial learning
,distributional shift problem
,cascading errors
,IDM
,NGSIM
,rllab
]
Click to expand
Some figures:
The state consists in 51 features divided into 3 groups: The core features include hand-picked features such as Speed , Curvature and Lane Offset . The LIDAR-like beams capture the surrounding objects in a fixed-size representation independent of the number of vehicles. Finally, 3 binary indicator features identify when the ego vehicle encounters undesirable states - collision , drives off road , and travels in reverse . Source. |
As for common adversarial approaches, the objective function in GAIL includes some sigmoid cross entropy terms. The objective is to fit ψ for the discriminator. But this objective function is non-differentiable with respect to θ. One solution is to optimize πθ separately using RL. But what reward function should be used? In order to drive πθ into regions of the state-action space similar to those explored by the expert πE, a surrogate reward r̃ is generated from D_ψ based on samples and TRPO is used to perform a policy update of πθ. Source. |
Authors: Kuefler, A., Morton, J., Wheeler, T., & Kochenderfer, M.
- One term: the problem of "cascading errors" in behavioural cloning (
BC
).BC
, which treatsIL
as a supervised learning problem, tries to fit a model to a fixed dataset of expert state-action pairs. In other words,BC
solves a regression problem in which the policy parameterization is obtained by maximizing the likelihood of the actions taken in the training data.- But inaccuracies can lead the stochastic policy to states that are underrepresented in the training data (e.g., an ego-vehicle edging towards the side of the road). And datasets rarely contain information about how human drivers behave in such situations.
- The policy network is then forced to generalize, and this can lead to yet poorer predictions, and ultimately to invalid or unseen situations (e.g., off-road driving).
- "Cascading Errors" refers to this problem where small inaccuracies compound during simulation and the agent cannot recover from them.
- This issue is inherent to sequential decision making.
- As found by the results:
-
"The root-weighted square error results show that the feedforward
BC
model has the best short-horizon performance, but then begins to accumulate error for longer time horizons." -
"Only
GAIL
(and of courseIDM
+MOBIL
) are able to stay on the road for extended stretches."
-
- One idea:
RL
provides robustness against "cascading errors".RL
maximizes the global, expected return on a trajectory, rather than local instructions for each observation. Hence more appropriate for sequential decision making.- Also, the reward function
r
(s_t
,a_t
) is defined for all state-action pairs, allowing an agent to receive a learning signal even from unusual states. And these signals can establish preferences between mildly undesirable behaviour (e.g., hard braking) and extremely undesirable behaviour (e.g., collisions).- In contrast,
BC
only receives a learning signal for those states represented in a labelled, finite dataset. - Because handcrafting an accurate
RL
reward function is often difficult,IRL
seems promising. In addition, the imitation (via the recovered reward function) extends to unseen states: e.g. a vehicle that is perturbed toward the lane boundaries should know to return toward the lane centre.
- In contrast,
- Another idea: use
GAIL
instead ofIRL
:-
"IRL approaches are typically computationally expensive in their recovery of an expert cost function. Instead, recent work has attempted to imitate expert behaviour through direct policy optimization, without first learning a cost function."
- "Generative Adversarial Imitation Learning" (
GAIL
) implements this idea:-
"Expert behaviour can be imitated by training a policy to produce actions that a binary classifier mistakes for those of an expert."
-
"
GAIL
trains a policy to perform expert-like behaviour by rewarding it for βdeceivingβ a classifier trained to discriminate between policy and expert state-action pairs."
-
- One contribution is to extend
GAIL
to the optimization of recurrent neural networks (GRU
in this case).
-
- One concept: "Trust Region Policy Optimization".
- Policy-gradient
RL
optimization with "Trust Region" is used to optimize the agent's policyΟΞΈ
, addressing the issue of training instability of vanilla policy-gradient methods.-
"
TRPO
updates policy parameters through a constrained optimization procedure that enforces that a policy cannot change too much in a single update, and hence limits the damage that can be caused by noisy gradient estimates."
-
- But what reward function to apply? Again, we do not want to do
IRL
. - Some "surrogate" reward function is empirically derived from the discriminator. Although it may be quite different from the true reward function optimized by expert, it can be used to drive
ΟΞΈ
into regions of the state-action space similar to those explored byΟE
.
- Policy-gradient
- One finding: Should previous actions be included in the state
s
?-
"The previous action taken by the ego vehicle is not included in the set of features provided to the policies. We found that policies can develop an over-reliance on previous actions at the expense of relying on the other features contained in their input."
- But on the other hand, the authors find:
-
"The
GAIL
GRU
policy takes similar actions to humans, but oscillates between actions more than humans. For instance, rather than outputting a turn-rate of zero on straight road stretches, it will alternate between outputting small positive and negative turn-rates". -
"An engineered reward function could also be used to penalize the oscillations in acceleration and turn-rate produced by the
GAIL
GRU
".
-
- Some interesting interpretations about the
IDM
andMOBIL
driver models (resp. longitudinal and lateral control).- These commonly-used rule-based parametric models serve here as baselines:
-
"The Intelligent Driver Model (
IDM
) extended this work by capturing asymmetries between acceleration and deceleration, preferred free road and bumper-to-bumper headways, and realistic braking behaviour." -
"
MOBIL
maintains a utility function and 'politeness parameter' to capture intelligent driver behaviour in both acceleration and turning."
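Coming back to the surrogate reward `r̃` derived from the discriminator (mentioned in the figure caption above): a typical choice in the `GAIL` literature, which I believe is close to what is done here (my notation, not verified against the paper), is large when the discriminator believes the pair came from the expert:

```latex
% Surrogate reward derived from the discriminator D_psi (typical GAIL form):
\tilde{r}(s_t, a_t) \;=\; -\log\big(1 - D_{\psi}(s_t, a_t)\big)
```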
"Modeling pedestrian-cyclist interactions in shared space using inverse reinforcement learning"
- [
2020
] [π] [ πUniversity of British Columbia, Vancouver
] - [
max-entropy
,feature matching
]
Click to expand
Left: The contribution of each feature in the linear reward model differs between the Maximum Entropy (ME ) and the Feature Matching (FM ) algorithms. The FM algorithm is inconsistent across levels and has a higher intercept to parameter weight ratio compared with the estimated weights using the ME . Besides, why does it penalize all lateral distances and all speeds in these overtaking scenarios? Right: good idea how to visualize reward function for state of dimension 5 . Source. |
Authors: Alsaleh, R., & Sayed, T.
- In short: A simple but good illustration of
IRL
concepts using Maximum Entropy (ME
) and Feature Matching (FM
) algorithms.- It reminds me some experiments I talk about in this video: "From RL to Inverse Reinforcement Learning: Intuitions, Concepts + Applications to Autonomous Driving".
- Motivations, here:
1-
Work in non-motorized shared spaces, in this case a cyclist-pedestrian zone.- It means high degrees of freedom in motions for all participants.
- And offers complex road-user interactions (behaviours different than on conventional streets).
2-
Model the behaviour of cyclists in this share space using agent-based modelling.agent-based
as opposed to physics-based prediction models such associal force model
(SFM
) orcellular automata
(CA
).- The agent is trying to maximize an unknown
reward
function. - The recovery of that reward function is the core of the paper.
- First,
2
interaction types are considered:- The cyclist
following
the pedestrian. - The cyclist
overtaking
the pedestrian. - This distinction avoids the search for a 1-size-fits-all model.
- The cyclist
- About the
MDP
:- The cyclist is the
agent
. state
(absolute for the cyclist or relative compared to the pedestrian):longitudinal distance
lateral distance
angle difference
speed difference
cyclist speed
state
discretization:-
"Discretized for each interaction type by dividing each state feature into
6
levels based on equal frequency observation in each level." - This non-constant bin-width partially addresses the imbalanced dataset.
6^5
=7776
states.
-
action
:acceleration
yaw rate
action
discretization:-
"Dividing the acceleration into five levels based on equal frequency observation in each level."
5^2
=25
actions.
-
discount factor
:-
"A discount factor of
0.975
is used assuming10%
effect of the reward at a state3 sec
later (90
time steps) from the current state."
-
- The cyclist is the
- About the dataset.
- Videos of two streets in Vancouver, for a total of
39
hours. 228
cyclist and276
pedestrian trajectories are extracted.
- Videos of two streets in Vancouver, for a total of
IRL
.- The two methods assume that the
reward
is a linear combination offeatures
. Herefeatures
arestate
components. 1-
Feature Matching (FM
).- It matches the feature counts of the expert trajectories.
- The authors do not details the
max-margin
part of the algorithm.
2-
Maximum Entropy (ME
).- It estimates the
reward
function parameters by maximizing the likelihood of the expert demonstrations under the maximum entropy distribution. - Being probabilistic, it can account for non-optimal observed behaviours.
- It estimates the
- The two methods assume that the
- The recovered reward model can be used for prediction - How to measure the similarity between two trajectories?
1-
Mean Absolute Error
(MAE
).- It compares elements of same indices in the two sequences.
2-
Hausdorff Distance
.-
"It computes the largest distance between the simulated and the true trajectories while ignoring the time step alignment".
-
- Current limitations:
1-to-1
interactions, i.e. a single pedestrian/cyclist pair.- Low-density scenarios.
-
"[in future works] neighbor condition (i.e. other pedestrians and cyclists) and shared space density can be explicitly considered in the model."
"Accelerated Inverse Reinforcement Learning with Randomly Pre-sampled Policies for Autonomous Driving Reward Design"
- [
2019
] [π] [ πUC Berkeley
,Tsinghua University, Beijin
] - [
max-entropy
]
Click to expand
Instead of the costly RL optimisation step at each iteration of the vanilla IRL , the idea is to randomly sample a massive of policies in advance and then to pick one of them as the optimal policy. In case the sampled policy set does not contain the optimal policy, exploration of policy is introduced as well for supplement. Source. |
The approximation used in Kuderer et al. (2015) is applied here to compute the second term of gradient about the expected feature values. Source. |
Authors: Xin, L., Li, S. E., Wang, P., Cao, W., Nie, B., Chan, C., & Cheng, B.
- Reminder: Goal of
IRL
= Recover the reward function of an expert from demonstrations (here trajectories). - Motivations, here:
1-
Improve the efficiency of "weights updating" in the iterative routine ofIRL
.- More precisely: generating optimal policy using
model-free RL
suffers from low sampling efficiency and should therefore be avoided. - Hence the term "accelerated"
IRL
.
- More precisely: generating optimal policy using
2-
Embed human knowledge where restricting the search space (policy space).
- One idea: "Pre-designed policy subspace".
-
"An intuitive idea is to randomly sample a massive of policies in advance and then to pick one of them as the optimal policy instead of finding it via
RL
optimisation."
-
- How to construct the policies sub-space?
- Human knowledge about vehicle controllers is used.
- Parametrized linear controllers are implemented: `acc` = `K1`*`Δd` + `K2`*`Δv` + `K3`*`Δa`, where the `Δ` terms are measured relative to the leading vehicle.
  - By sampling tuples of <`K1`, `K2`, `K3`> coefficients, `1 million` (candidate) policies are generated to form the sub-space (a small sketch of this sampling step is given below).
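A tiny sketch of this pre-sampling step; the sampling ranges for the gains are arbitrary, only the linear-controller form and the `1 million` candidate count come from the summary above.

```python
# Pre-sample a sub-space of candidate policies: each policy is just (K1, K2, K3).
import numpy as np

rng = np.random.default_rng(0)
candidates = rng.uniform(low=[0.0, 0.0, 0.0], high=[2.0, 2.0, 1.0],
                         size=(1_000_000, 3))        # 1 million <K1, K2, K3> tuples

def acceleration(k, delta_d, delta_v, delta_a):
    """Linear car-following controller: acc = K1*Δd + K2*Δv + K3*Δa."""
    return k[0] * delta_d + k[1] * delta_v + k[2] * delta_a

print(acceleration(candidates[0], delta_d=5.0, delta_v=-1.0, delta_a=0.2))
```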
- Section about
Max-Entropy IRL
(btw. very well explained, as for the section introducingIRL
):-
"Ziebart et al. (2008) employed the principle of
maximum entropy
to resolve ambiguities in choosing trajectory distributions. This principle maximizes the uncertainty and leads to the distribution over behaviors constrained to matching feature expectations, while being no more committed to any particular trajectory than this constraint requires". -
"Maximizing the entropy of the distribution over trajectories subject to the feature constraints from expertβs trajectories implies to maximize the likelihood under the maximum entropy (exponential family) distributions. The problem is convex for
MDPs
and the optima can be obtained using gradient-based optimization methods". -
"The gradient [of the Lagrangian] is the difference between empirical feature expectations and the learners expected feature expectations."
-
- How to compute the second term of this gradient?
- It implies integrating over all possible trajectories, which is infeasible.
- As Kuderer et al. (2015), one can compute the feature values of the most likely trajectory as an approximation of the feature expectation.
-
"With this approximation, only the optimal trajectory associated to the optimal policy is needed, in contrast to regarding the generated trajectories as a probability distribution."
- About the features.
- As noted in my experiments about
IRL
, they serve two purposes (infeature-matching
-basedIRL
methods):1-
In the reward function: they should represent "things we want" and "things we do not want".2-
In the feature-match: to compare two policies based on their sampled trajectories, they should capture relevant properties of driving behaviours.
- Three features for this longitudinal acceleration task:
front-veh time headway
.long. acc
.deviation to speed limit
.
- As noted in my experiments about
- Who was the expert?
- Expert followed a modified linear car-following (
MLCF
) model.
- Expert followed a modified linear car-following (
- Results.
- Iterations are stopped after
11
loops. - It would have been interesting for comparison to test a "classic"
IRL
method whereRL
optimizations are applied.
- Iterations are stopped after
"Jointly Learnable Behavior and Trajectory Planning for Self-Driving Vehicles"
- [
2019
] [π] [ πUber
] - [
max-margin
]
Click to expand
Both behavioural planner and trajectory optimizer share the same cost function, whose weigth parameters are learnt from demonstration. Source. |
Authors: Sadat, A., Ren, M., Pokrovsky, A., Lin, Y., Yumer, E., & Urtasun, R.
- Main motivation:
- Design a decision module where both the behavioural planner and the trajectory optimizer share the same objective (i.e. cost function).
- Therefore "joint".
-
"[In approaches not-joint approaches] the final trajectory outputted by the trajectory planner might differ significantly from the one generated by the behavior planner, as they do not share the same objective".
- Requirements:
1-
Avoid time-consuming, error-prone, and iterative hand-tuning of cost parameters.- E.g. Learning-based approaches (
BC
).
- E.g. Learning-based approaches (
2-
Offer interpretability about the costs jointly imposed on these modules.- E.g. Traditional modular
2
-stage approaches.
- E.g. Traditional modular
- About the structure:
- The driving scene is described in
W
(desired route
,ego-state
,map
, anddetected objects
). ProbablyW
for "World"? - The behavioural planner (
BP
) decides two things based onW
:1-
A high-level behaviourb
.- The path to converge to, based on one chosen manoeuvre:
keep-lane
,left-lane-change
, orright-lane-change
. - The
left
andright
lane boundaries. - The obstacle
side assignment
: whether an obstacle should stay in thefront
,back
,left
, orright
to the ego-car.
- The path to converge to, based on one chosen manoeuvre:
2-
A coarse-level trajectoryΟ
.- The loss has also a regularization term.
- This decision is "simply" the
argmin
of the shared cost-function, obtained by sampling + selecting the best.
- The "trajectory optimizer" refines
Ο
based on the constraints imposed byb
.- E.g. an overlap cost will be incurred if the
side assignment
ofb
is violated.
- E.g. an overlap cost will be incurred if the
- A cost function parametrized by
w
assesses the quality of the selected <b
,Ο
> pair:cost
=w^T
.sub-costs-vec
(Ο
,b
,W
).- Sub-costs relate to safety, comfort, feasibility, mission completion, and traffic rules.
- The driving scene is described in
- Why "learnable"?
- Because the weight vector
w
that captures the importance of each sub-cost is learnt based on human demonstrations.-
"Our planner can be trained jointly end-to-end without requiring manual tuning of the costs functions".
-
- They are two losses for that objective:
1-
Imitation loss (withMSE
).- It applies on the <
b
,Ο
> produced by theBP
.
2-
Max-margin loss to penalize trajectories that have small cost and are different from the human driving trajectory.- It applies on the <
Ο
> produced by the trajectory optimizer. -
"This encourages the human driving trajectory to have smaller cost than other trajectories".
- It reminds me of the
max-margin
method inIRL
where the weights of the reward function should make the expert demonstration better than any other policy candidate.
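- To fix the idea, here is a minimal `numpy` sketch of that shared objective as I understand it (names, shapes and the margin choice are my assumptions, not the authors' implementation): the cost of a candidate <behaviour, trajectory> pair is linear in the sub-costs, and `w` is fitted so that the human demonstration gets a lower cost than sampled alternatives.

```python
import numpy as np

def pair_cost(w, sub_costs):
    """Cost of one <behaviour, trajectory> candidate: w^T . [safety, comfort, feasibility, ...]."""
    return float(np.dot(w, sub_costs))

def max_margin_loss(w, human_sub_costs, sampled_sub_costs, margins):
    """Hinge-style loss: the human trajectory should be cheaper than every sampled
    candidate by a margin (e.g. proportional to its distance to the human trajectory)."""
    c_human = pair_cost(w, human_sub_costs)
    violations = [max(0.0, c_human + m - pair_cost(w, c))
                  for c, m in zip(sampled_sub_costs, margins)]
    return max(violations)

# toy usage: 4 sub-costs, 3 sampled (non-human) candidates
w = np.ones(4)
human = np.array([0.1, 0.2, 0.0, 0.3])
candidates = [np.array([0.5, 0.1, 0.2, 0.4]), np.array([0.2, 0.3, 0.1, 0.2]), np.array([0.9, 0.0, 0.3, 0.1])]
print(max_margin_loss(w, human, candidates, margins=[0.5, 0.2, 0.8]))
```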
"Adversarial Inverse Reinforcement Learning for Decision Making in Autonomous Driving"
- [
2019
] [π] [ πBerkeley, Chalmers University, Peking University
] [ πZenuity
] - [
GAIL
,AIRL
,action-masking
,augmented reward function
]
Click to expand
Author: Wang, P., Liu, D., Chen, J., & Chan, C.-Y.
In Adversarial IRL (AIRL ), the discriminator tries to distinguish learnt actions from demonstrated expert actions. Action-masking is applied, removing some action combinations that are not preferable, in order to reduce the unnecessary exploration. Finally, the reward function of the discriminator is extended with some manually-designed semantic reward to help the agent successfully complete the lane change and not to collide with other objects. Source. |
- One related concept (detailed further on this page): Generative Adversarial Imitation Learning (
GAIL
).- An imitation learning method where the goal is to learn a policy against a discriminator that tries to distinguish learnt actions from expert actions.
- Another concept used here: Guided Cost Learning (
GCL
).- A
Max-Entropy
IRL
method that makes use of importance sampling (IS
) to approximate the partition function (the term in the gradient of the log-likelihood function that is hard to compute since it involves an integral over all possible trajectories).
- A
- One concept introduced: Adversarial Inverse Reinforcement Learning (
AIRL
).- It combines
GAIL
withGCL
formulation.-
"It uses a special form of the discriminator different from that used in
GAIL
, and recovers a cost function and a policy simultaneously as that inGCL
but in an adversarial way."
-
- Another difference is the use of a model-free
RL
method to compute the new optimal policy, instead of model-basedguided policy search
(GPS
) used inGCL
:-
"As the dynamic driving environment is too complicated to learn for the driving task, we instead use a model-free policy optimization method."
-
- One motivation of
AIRL
is therefore to cope with changes in the dynamics of the environment and to make the learnt policy more robust to system noise.
- It combines
- One idea: Augment the learned reward with some "semantic reward" term to improve learning efficiency.
- The motivation is to manually embed some domain knowledge, in the generator reward function.
-
"This should provide the agent some informative guidance and assist it to learn fast."
- About the task:
-
"The task of our focus includes a
longitudinal
decision - the selection of a target gap - and alateral
decision - whether to commit the lane change right now."
- A low-level controller, consisting of a
PID
for lateral control andsliding-mode
for longitudinal control, is the use to execute the decision.
- A low-level controller, consisting of a
- The authors use some
action-masking
technics where only valid action pairs are allowed to reduce the agentβs unnecessary exploration.
-
"Predicting vehicle trajectories with inverse reinforcement learning"
- [
2019
] [π] [ πKTH
] - [
max-margin
]
Click to expand
Author: Hjaltason, B.
About the features: The φ are distances read from the origin of a vision field and are represented by red dotted lines. They take value in [0, 1], where φi = 1 means the dotted line does not hit any object and φi = 0 means it hits an object at origin. In this case, two objects are inside the front vision field. Hence φ1 = 0.4 and φ2 = 0.6. Source. |
Example of max-margin IRL . Source. |
- A good example of max-margin
IRL
:-
"There are two classes: The expert behaviour from data gets a label of
1
, and the "learnt" behaviours a label of-1
. The framework performs amax-margin
optimization step to maximise the difference between both classes. The result is an orthogonal vectorwi
from the max margin hyperplane, orthogonal to the estimated expert feature vectorµ(πE)
". - From this new
R=w*f
, an optimal policy is derived usingDDPG
. - Rollouts are performed to get an estimated feature vector that is added to the set of "learnt" behaviours.
- The process is repeated until convergence (when the estimated values
w*Β΅(Ο)
are close enough).
-
- Note about the reward function:
- Here, r(
s, a, s'
) is also a function of the action and the next state. - Here is a post about different forms of reward functions.
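- To make that loop concrete, a schematic sketch of the iterative procedure (the policy-optimization and feature-estimation steps are placeholders I assume; the thesis uses `DDPG` and rollouts for them, and a proper max-margin solver instead of the crude update below):

```python
import numpy as np

def max_margin_irl(mu_expert, train_policy, estimate_mu, n_iter=20, tol=1e-3):
    """mu_expert: expert feature expectation estimated from the recorded data.
    train_policy(w): policy optimised (e.g. with DDPG) for the reward R = w . f(s).
    estimate_mu(policy): feature expectation estimated from rollouts of that policy."""
    learnt_mus = []
    w = np.random.randn(*np.shape(mu_expert))
    for _ in range(n_iter):
        policy = train_policy(w)
        mu = estimate_mu(policy)          # rollouts with the current "learnt" behaviour
        learnt_mus.append(mu)
        # crude stand-in for the max-margin step: point w from the learnt behaviours
        # towards the expert feature vector (a real solver would maximise the margin)
        w = mu_expert - np.mean(learnt_mus, axis=0)
        if np.linalg.norm(mu_expert - mu) < tol:  # estimated values are close enough
            break
    return w, policy
```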
"A Survey of Inverse Reinforcement Learning: Challenges, Methods and Progress"
- [
2019
] [π] [ πUniversity of Georgia
] - [
reward engineering
]
Click to expand
Authors: Arora, S., & Doshi, P.
Trying to generalize and classify IRL methods. Source. |
I learnt about the state visitation frequency ψ(π)(s) and the feature count expectation µ(π)(φ). Source. |
- This large review does not focus on
AD
applications, but it provides a good picture ofIRL
and can give ideas. Here are my take-aways. - Definition:
-
"Inverse reinforcement learning (
IRL
) is the problem of modeling the preferences of another agent using its observed behavior [hence class ofIL
], thereby avoiding a manual specification of its reward function."
-
- Potential
AD
applications ofIRL
:- Decision-making: If I find your underlying reward function, and I consider you as an expert, I can imitate you.
- Prediction: If I find your underlying reward function, I can imagine what you are going to do.
- I start rethinking
Imitation Learning
. The goal ofIL
is to derive a policy based on some (expert) demonstrations.- Two branches emerge, depending on what structure is used to model the expert behaviour. Where is that model captured?
1-
In a policy.- This is a "direct approach". It includes
BC
and its variants. - The task is to learn that
state
->
action
mapping.
- This is a "direct approach". It includes
2-
In a reward function.- Core assumption: Each driver has an internal reward function and acts optimally w.r.t. it.
- The main task is to learn that reward function (
IRL
), which captures the expert's preferences. - The second step consists in deriving the optimal policy for this derived reward function.
As Ng and Russell put it: "The
reward function
, rather than thepolicy
, is the most succinct, robust, and transferable definition of the task"
- What happens if some states are missing in the demonstration?
1-
Direct methods will not know what to do and will try to interpolate from similar states. This could be risky (c.f.distributional shift
problem andDAgger
).-
"If a policy is used to describe a task, it will be less succinct since for each state we have to give a description of what the behaviour should look like". From this post
-
2-
IRL
methods act optimally w.r.t. the underlying reward function, which could be better, since it is more robust.
- In other words, a policy that is better than the "expert" can be derived, while having very little exploration. This "minimal exploration" property is useful for tasks such as
AD
- This is sometimes referred to as
Apprenticeship learning
.
- Two branches emerge, depending on what structure is used to model the expert behaviour. Where is that model captured?
- One new concept I learnt:
State-visitation frequency
(it reminds me some concepts of Markov chains).- Take a policy
Ο
. Let run the agent with it. Count how often it sees each state. This is called thestate-visitation frequency
(note it is for a specificΟ
). - Two ideas from there:
- Iterating until this
state-visitation frequency
stops changing yields theconverged frequency function
. - Multiplying that
converged state-visitation frequency
withreward
gives another perspective to thevalue function
.- The
value function
can now be seen as a linear combination of the expected feature countΒ΅
(Οk
)(Ο
) (also calledsuccessor feature
).
- The
- Iterating until this
- Take a policy
- One common assumption:
-> "The solution is a weighted linear combination of a set of reward features".
- This greatly reduces the search space.
-
"It allowed the use of feature expectations as a sufficient statistic for representing the value of trajectories or the value of an expertβs policy."
- Known
IRL
issues (and solutions):1-
This is an under-constrained learning problem.-
"Many reward functions could explain the observations".
- Among them are highly "degenerate" functions with all reward values zero.
- One solution is to impose constraints in the optimization.
- For instance try to maximize the sum of "value-margins", i.e. the difference between the value functions of the best and the second-best actions.
-
"
mmp
makes the solution policy have state-action visitations that align with those in the expert's demonstration."
"
maxent
distributes probability mass based on entropy but under the constraint of feature expectation matching."
- Another common constraint is to encourage the reward function to be as simple as possible, similar to
L1
regularization in supervised learning.
-
2-
Two incomplete models:2.1-
How to deal with incomplete/absent model of transition probabilities?2.2-
How to select the reward features?-
"[One could] use neural networks as function approximators that avoid the cumbersome hand-engineering of appropriate reward features".
-
-
"These extensions share similarity with
model-free RL
where thetransition
model andreward function
features are also unknown".
3-
How to deal with noisy demonstrations?- Most approaches assume a Gaussian noise and therefore apply Gaussian filters.
- How to classify
IRL
methods?- It can be useful to ask yourself two questions:
1-
What are the parameters of the HypothesisR
function`?- Most approaches use the "linear approximation" and try to estimate the weights of the linear combination of features.
2-
What for "Divergence Metric", i.e. how to evaluate the discrepancy to the expert demonstrations?-
"[it boils down to] a search in reward function space that terminates when the behavior derived from the current solution aligns with the observed behavior."
- How to measure the closeness or the similarity to the expert?
1-
Compare the policies (i.e. the behaviour).- E.g. how many <
state
,action
> pairs are matching? -
"A difference between the two policies in just one state could still have a significant impact."
- E.g. how many <
2-
Compare the value functions (they are defined over all states).- The authors mention the
inverse learning error
(ILE
) =||
V
(expert policy
)-
V
(learnt policy
)||
and thevalue loss
(use as a margin).
- The authors mention the
-
- Classification:
Margin-based
optimization: Learn a reward function that explains the demonstrated policy better than alternative policies by amargin
(addressIRL
's "solution ambiguity").- The intuition here is that we want a reward function that clearly distinguishes the optimal policy from other possible policies.
Entropy-based
optimization: Apply the "maximum entropy principle" (together with the "feature expectations matching" constraint) to obtain a distribution over potential reward functions.Bayesian
inference to deriveP
(^R
|demonstration
- What about the likelihood
P
(<s
,a
> |ΛR
)? This probability is proportional to the exponentiated value function:exp
(Q
[s
,a
]).
- What for the likelihood
Regression
andclassification
.
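- Side note: a toy Monte-Carlo sketch of how I picture the `state-visitation frequency` and the feature expectation `µ(π)(φ)` being estimated (the `env` / `policy` / `features` interfaces are my own assumptions, not from the survey):

```python
import numpy as np
from collections import defaultdict

def visitation_and_feature_expectation(env, policy, features, gamma=0.95,
                                       n_rollouts=100, horizon=50):
    """Discounted state-visitation frequency and feature expectation of one policy."""
    visitation = defaultdict(float)
    mu = 0.0
    for _ in range(n_rollouts):
        state = env.reset()
        for t in range(horizon):
            visitation[state] += gamma ** t
            mu = mu + (gamma ** t) * np.asarray(features(state))
            state, done = env.step(policy(state))
            if done:
                break
    for s in visitation:
        visitation[s] /= n_rollouts
    mu = mu / n_rollouts
    # with a linear reward R(s) = w . features(s), the value of the policy is simply w . mu
    return visitation, mu
```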
"Learning Reward Functions for Optimal Highway Merging"
Click to expand
Author: Weiss, E.
The assumption-free reward function that uses a simple polynomial form based on state and action values at each time step does better at minimizing both safety and mobility objectives, even though it does not incorporate human knowledge of typical reward function structures. About Pareto optima: at these points, it becomes impossible to improve the minimization of one objective without worsening the minimization of the other objective. Source. |
- What?
- A Project from the Stanford course (AA228/CS238 - Decision Making under Uncertainty). Examples of student projects can be found here.
- My main takeaway:
- A simple problem that illustrates the need for (learning more about)
IRL
.
- A simple problem that illustrates the need for (learning more about)
- The merging task is formulated as a simple
MDP
:- The state space has size
3
and is discretized:lat
+long
ego position andlong
position of the other car. - The other vehicle transitions stochastically (
T
) according to three simple behavioural models:fast
,slow
,average speed
driving. - The main contribution concerns the reward design: how to shape the reward function for this multi-objective (trade-off
safety
/efficiency
) optimization problem?
- The state space has size
- Two reward functions (
R
) are compared:-
1-
"The first formulation models rewards based on our prior knowledge of how we would expect autonomous vehicles to operate, directly encoding human values such assafety
andmobility
into this problem as a positive reward for merging, a penalty formerging close
to the other vehicle, and a penalty forstaying
in the on-ramp." -
2-
"The second reward function formulation assumes no prior knowledge of human values and instead comprises a simple degree-one polynomial expression for the components of thestate
and theaction
."- The parameters are tuned using a sort of grid search (no proper
IRL
).
- The parameters are tuned using a sort of grid search (no proper
-
- How to compare them?
- Since both
T
andR
are known, a planning (as opposed to learning) algorithm can be used to find the optimal policy. Herevalue iteration
is implemented. - The resulting agents are then evaluated based on two conflicting objectives:
-
"Minimizing the distance along the road at which point merging occurs and maximizing the
gap
between the two vehicles when merging."
-
- Next step will be proper
IRL
:-
"We can therefore conclude that there may exist better reward functions for capturing optimal driving policies than either the intuitive prior knowledge reward function or the polynomial reward function, which doesnβt incorporate any human understanding of costs associated with
safety
andefficiency
."
-
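- For reference, a generic `value iteration` sketch for such a small discretized `MDP` where `T` and `R` are known (the array layout is my assumption, not the project's code):

```python
import numpy as np

def value_iteration(T, R, gamma=0.95, tol=1e-6):
    """T[s, a, s2]: transition probabilities, R[s, a]: rewards.
    Returns the optimal values V and the greedy policy."""
    n_states, n_actions, _ = T.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * np.einsum('sap,p->sa', T, V)   # Bellman backup
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```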
"Game-theoretic Modeling of Traffic in Unsignalized Intersection Network for Autonomous Vehicle Control Verification and Validation"
- [
2019
] [π] [ πUniversity of Michigan and Bilkent University, Ankara
] - [
DAgger
,level-k control policy
]
Click to expand
Authors: Tian, R., Li, N., Kolmanovsky, I., Yildiz, Y., & Girard, A.
-
This paper builds on several works (also analysed further below):
- "Adaptive Game-Theoretic Decision Making for Autonomous Vehicle Control at Roundabouts" - (Tian, Li, Li, et al., 2019).
- "Game Theoretic Modeling of Vehicle Interactions at Unsignalized Intersections and Application to Autonomous Vehicle Control" - (N. Li, Kolmanovsky, Girard, & Yildiz, 2018).
- "Game-theoretic modeling of driver and vehicle interactions for verification and validation of autonomous vehicle control systems" - (N. Li et al., 2016).
-
Addressed problem: unsignalized intersections with heterogeneous driving styles (
k
in [0
,1
,2
])- The problem is formulated using the level-
k
game-theory formalism (See analysed related works for more details).
- The problem is formulated using the level-
-
One idea: use imitation learning (
IL
) to obtain an explicit level-k
control policy.- A level-
k
policy is a mappingpi
: <ego state
,other's states
,ego k
>->
<sequence of ego actions
>. - The ego-agent maintains belief over the level
k
of other participants. These estimates are updated using maximum likelihood and Bayes rule. - A first attempt with supervised learning on a fix dataset (
behavioural cloning
) was not satisfying enough due to its drift shortcomings:-
"A small error may cause the vehicle to reach a state that is not exactly included in the dataset and, consequently, a large error may occur at the next time step."
-
- The solution is to also aggregate experience sampled from the currently learnt policy.
- The
DAgger
algorithm (Dataset Aggregation) is used in this work. - One point I did not understand: I am surprised that no initial "off-policy" demonstrations is used. The dataset
D
is initialized as empty. - The policy is represented by a neural network.
- The
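- A schematic `DAgger` loop, as I understand it is used here (the level-`k` expert, the network training and the rollout routine are placeholders I assume):

```python
def dagger(expert_action, train, rollout, n_iterations=10):
    """expert_action(state): the explicit level-k policy, queried for labels.
    train(dataset): fits the neural-network policy on (state, action) pairs.
    rollout(policy): states visited when executing the currently learnt policy."""
    dataset = []                       # initialized empty, as in the paper
    policy = train(dataset)
    for _ in range(n_iterations):
        visited_states = rollout(policy)                            # on-policy states, to fight drift
        dataset += [(s, expert_action(s)) for s in visited_states]  # relabel with the expert
        policy = train(dataset)                                     # aggregate and retrain
    return policy
```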
"Interactive Decision Making for Autonomous Vehicles in Dense Traffic"
-
[
2019
] [π] [ πHonda
] -
[
game tree search
,interaction-aware decision making
]
Click to expand
In the rule-based stochastic driver model describing the other agents, 2 thresholds are introduced: The reaction threshold, sampled from the range {-1.5m, 0.4m}, describes whether or not the agent reacts to the ego car. The aggression threshold, uniformly sampled from {-2.2, 1.1m}, describes how the agent reacts. Source. |
Two tree searches are performed: The first step is to identify a target merging gap based on the probability of a successful merge for each of them. The second search involves forward simulation and collision checking for multiple ego and traffic intentions. In practice the author found that ''the coarse tree - i.e. with intention only - was sufficient for long term planning and only one intention depth needed to be considered for the fine-grained search''. This reduces this second tree to a matrix game. Source. |
Author: Isele, D.
- Three motivations when working on decision-making for merging in dense traffic:
1-
Prefergame theory
approaches overrule-based
planners.- To avoid the
frozen robot
issue, especially in dense traffic. -
"If the ego car were to wait for an opening, it may have to wait indefinitely, greatly frustrating drivers behind it".
- To avoid the
2-
Prefer thestochastic game
formulation overMDP
.- Merging in dense traffic involves interacting with self-interested agents ("self-interested" in the sense that they want to travel as fast as possible without crashing).
-
"
MDPs
assume agents follow a set distribution which limits an autonomous agent's ability to handle non-stationary agents which change their behaviour over time."
"
Stochastic games
are an extension toMDPs
that generalize to multiple agents, each of which has its own policy and own reward function." - In other words,
stochastic games
seem more appropriate to model interactive behaviours, especially in the forward rollout of tree search:
counterfactual reasoning
is proposed. - It describes how behaviour might change in response to ego agent intervention.
- An interactive prediction model based on the concept of
3-
Prefertree search
overneural networks
.-
"Working with the
game trees
directly produces interpretable decisions which are better suited to safety guarantees, and ease the debugging of undesirable behaviour." - In addition, it is possible to include stochasticity for the tree search.
- More precisely, the probability of a successful merge is computed for each potential gap based on:
- The traffic participantβs willingness to yield.
- The size of the gap.
- The distance to the gap (from our current position).
- More precisely, the probability of a successful merge is computed for each potential gap based on:
-
- How to model other participants, so that they act "intelligently"?
-
"In order to validate our behaviour we need interactive agents to test against. This produces a
chicken and egg
problem, where we need to have an intelligent agent to develop and test our agent. To address this problem, we develop a stochastic rule-based merge behaviour which can give the appearance that agents are changing their mind." - This merging-response driver model builds on the ideas of
IDM
, introducing two thresholds (c.f. figure):- One threshold governs whether or not the agent reacts to the ego car,
- The second threshold determines how the agent reacts.
-
"This process can be viewed as a rule-based variant of negotiation strategies: an agent proposes he/she go first by making it more dangerous for the other, the other agent accepts by backing off."
-
- How to reduce the computational complexity of the probabilistic game tree search, while keeping safety considerations?
- The forward simulation and the collision checking are costly operations. Especially when the depth of the tree increases.
- Some approximations include reducing the number of actions (for both the ego- and the other agents), reducing the number of interacting participants and reducing the branching factor, as can be seen in the steps of the presented approach:
1-
Select an intention class based on a coarse search.
- the ego-actions are decomposed into asub-goal selection task
and awithin-sub-goal set of actions
.2-
Identify the interactive traffic participant.
- it is assumed that at any given time, the ego-agent interacts with only one other agent.3-
Predict other agentsβ intentions.
- working withintentions
, the continuous action space can be discretized. It reminds me the concept oftemporal abstraction
which reduces the depth of the search.4-
Sample and evaluate the ego intentions.
- a set of safe (absence of collision) ego-intentions can be generated and assessed.5-
Act, observe, and update our probability models.
- the probability of safe successful merge.
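- To illustrate the two-threshold idea, a toy sketch of the stochastic merge-response behaviour (the way the thresholds are compared to the gap is my guess; only the sampling ranges come from the figure):

```python
import random

def sample_driver():
    """Each traffic participant draws its two thresholds once."""
    return {"reaction": random.uniform(-1.5, 0.4),    # does the agent react to the ego car?
            "aggression": random.uniform(-2.2, 1.1)}  # how does the agent react?

def merge_response(driver, gap_to_ego):
    """Toy rule-based reaction of a traffic participant to the merging ego car."""
    if gap_to_ego > driver["reaction"]:
        return "ignore"      # the ego proposal is not even considered yet
    if gap_to_ego > driver["aggression"]:
        return "yield"       # accept the negotiation, back off and open the gap
    return "close_gap"       # reject it, keep going and close the gap
```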
"Adaptive Robust Game-Theoretic Decision Making for Autonomous Vehicles"
-
[
2019
] [π] [ πUniversity of Michigan
] []
-
[
k-level strategy
,MPC
,interaction-aware prediction
]
Click to expand
The agent maintains a belief on the k parameter for other vehicles and updates it at each step. Source. |
Authors: Sankar, G. S., & Han, K.
- One related work (described further below): Decision making in dynamic and interactive environments based on cognitive hierarchy theory: Formulation, solution, and application to autonomous driving by (Li, S., Li, N., Girard, A., & Kolmanovsky, I. 2019).
- One framework: "level-
k
game-theoretic framework".- It is used to model the interactions between vehicles, taking into account the rationality of the other agents.
- The agents are categorized into hierarchical structure of their cognitive abilities, parametrized with a reasoning depth
k
in [0
,1
,2
].- A level-
0
vehicle considers the other vehicles in the traffic scenario as stationary obstacles, hence being "aggressive". - A level-
1
agent assumes other agents are at level-0
. ...
- A level-
- This parameter
k
is what the agent must estimate to model the interaction with the other vehicles.
- One term: "disturbance set".
- This set, denoted
W
, describes the uncertainty in the position estimate of the other vehicle (with somedelta
, similar to the variance in Kalman filters). - It should capture both the uncertainty about the transition model and the uncertainty about the driver models.
- This set is considered when taking action using a "feedback min-max strategy".
- I must admit I did not fully understand the concept. Here is a quote:
-
"The min-max strategy considers the worst-case disturbance affecting the behaviour/performance of the system and provides control actions to mitigate the effect of the worst-case disturbance."
- The important idea is to adapt the size of this
W
set in order to avoid over-conservative behaviours (compared to reachable-set methods).- This is done based on the confidence in the estimated driver model (probability distribution of the estimated
k
) for the other vehicles.- If the agent is sure that the other car follows model
0
, then it should be "fully" conservative. - If the agent is sure it follows level
1
, then it could relax its conservatism (i.e. reduce the size of the disturbance set) since it is taken into consideration.
- If the agent is sure that the other car follows model
- This is done based on the confidence in the estimated driver model (probability distribution of the estimated
- This set, denoted
- I would like to draw some parallels:
- With
(PO)MDP
formulation: for the use of a transition model (or transition function) that is hard to define. - With
POMDP
formulation: for the tracking of believes about the driver model (or intention) of other vehicles.- The estimate of the probability distribution (for
k
) is updated at every step.
- The estimate of the probability distribution (for
- With
IRL
: where the agent can predict the reaction of other vehicles assuming they act optimally w.r.t a reward function it is estimating. - With
MPC
: the choice of the optimal control following a receding horizon strategy.
- With
"Towards Human-Like Prediction and Decision-Making for Automated Vehicles in Highway Scenarios"
Click to expand
- Note:
- this
190
-page thesis is also referenced in the sections for prediction and planning. - I really like how the author organizes synergies between three modules that are split and made independent in most modular architectures:
(1)
driver model(2)
behaviour prediction(3)
decision-making
- this
Author: Sierra Gonzalez, D.
- Related work: there are close concepts to the approach of
(Kuderer et al., 2015)
referenced below. - One idea: encode the driving preferences of a human driver with a reward function (or cost function), mentioning a quote from Abbeel, Ng and Russell:
"The reward function, rather than the policy or the value function, is the most succinct, robust, and transferable definition of a task".
-
Other ideas:
- Use IRL to avoid the manual tuning of the parameters of the reward model. Hence learn a cost/reward function from demonstrations.
- Include dynamic features, such as the
time-headway
, in the linear combination of the cost function, to take the interactions between traffic participants into account. - Combine IRL with a trajectory planner based on "conformal spatiotemporal state lattices".
- The motivation is to deal with continuous state and action spaces and handle the presence of dynamic obstacles.
- Several advantages (I honestly did not understand that point): the ability to exploit the structure of the environment, to consider time as part of the state-space and respect the non-holonomic motion constraints of the vehicle.
-
One term: "planning-based motion prediction".
- The resulting reward function can be used to generate trajectory (for prediction), using optimal control.
- Simply put, it can be assumed that each vehicle in the scene behaves in the "risk-averse" manner encoded by the model, i.e. choosing actions leading to the lowest cost / highest reward.
- This method is also called "model-based prediction" since it relies on a reward function or on the models of an MDP.
- This prediction tool is not used alone but rather coupled with some DBN-based manoeuvre estimation (detailed in the section on prediction).
"Car-following method based on inverse reinforcement learning for autonomous vehicle decision-making"
-
[
2018
] [π] [ πTsinghua University, California Institute of Technology, Hunan University
] -
[
maximum-margin IRL
]
Click to expand
Kernel functions are used on the continuous state space to obtain a smooth reward function using linear function approximation. Source. |
As often, the divergence metric (to measure the gap between one candidate and the expert) is the expected value function estimated on sampled trajectories. Example with 2 other candidate policies. I am still confused that each of their decisions is based on a state seen by the expert, i.e. they are not building their own full trajectory. Source. |
Authors: Gao, H., Shi, G., Xie, G., & Cheng, B.
- One idea: A simple and "educationally relevant" application to
IRL
and a good implementation of the algorithm of (Ng A. & Russell S., 2000): Algorithms for Inverse Reinforcement Learning.- Observe human behaviours during a "car following" task, assume his/her behaviour is optimal w.r.t. an hidden reward function, and try to estimate that function.
- Strong assumption:
no lane-change
,no overtaking
,no traffic-light
. In other words, it is only concerned with the longitudinal control.
- Which
IRL
method?Maximum-margin
. The aim is to learn a reward function that explains the demonstrated policy better than alternative policies by a margin.
- Steps:
1-
Define a simple2d
continuous state spaces
= (s0
,s1
).s0
=ego-speed
divided into15
intervals (eachcentre
will serve to buildmeans
for Gaussian kernel functions).s1
=dist-to-leader
divided into36
intervals (same remark).- A normalization is additionally applied.
2-
Feature transformation: Map the2d
continuous state to a finite number of features using kernel functions.- I recommend this short video about feature transformation using kernel functions.
- Here, Gaussian radial kernel functions are used:
- Why "radial"? The closer the state to the centre of the kernel, the higher the response of the function. And the further you go, the larger the response "falls".
- Why "Gaussian"? Because the standard deviation describes how sharp that "fall" is.
- Note that these functions are
2d
:mean
= (the centre of onespeed
interval, the centre of onedist
interval).
- The distance of the continuous state
s
=
(s0
,s1
) to each of the15
*36
=540
means
s
(i
,j
) can be computed. - This gives
540
kernel featuresf
(i
,j
) = K(s
,s
(i
,j
)).
3-
The one-stepreward
is assumed to be linear combination of that features.- Given a policy, a trajectory can be constructed.
- This is a list of
states
. This list can be mapped to a list ofrewards
. - The discounted sum of this list leads to the
trajectory return
, seen as expectedValue function
.
- This is a list of
- One could also form
540
lists for this trajectory (one per kernel feature). Then reduce them bydiscounted_sum()
, leading to540
V_f
(i
,j
) per trajectory.- The
trajectory return
is then simply the linear combination:theta
(i
,j
)*
V_f
(i
,j
).
- The
- This can be computed for the demonstrating expert, as well as for many other policies.
- Again, the task is to tune the weights so that the expert results in the largest values, against all possible other policies.
- Given a policy, a trajectory can be constructed.
4-
The goal is now to find the540
theta
(i
,j
) weights parameters solution of themax-margin
objective:- One goal:
costly single-step deviation
.- Try to maximize the smallest difference one could find.
- I.e. select the best non-expert-policy action and try to maximize the difference to the expert-policy action in each state.
max
[overtheta
]min
[overΟ
] of thesum
[overi
,j
] oftheta
(i
,j
)*
[f_candidate
(i
,j
) -f_expert
(i
,j
)].
- As often the
value function
serves as "divergence metric".
- Try to maximize the smallest difference one could find.
- One side heuristic to remove degenerate solutions:
-
"The reward functions with many small rewards are more natural and should be preferred". from here.
- Hence a regularization constraint (a constraint, not a loss like
L1
!) on thetheta
(i
,j
).
-
- The optimization problem with strict constraint is transformed into an optimization problem with "inequality" constraint.
- Violating constraints is allowed but penalized.
- As I understood from my readings, that relaxes the linear assumption in the case the true
reward function
cannot be expressed as a linear combination of the fixed basis functions.
- The resulting system of equations is solved here with Lagrange multipliers (linear programming was recommended in the original
max-margin
paper).
- One goal:
5-
Once thetheta
(i
,j
) are estimated, theR
can be expressed.
- About the other policy "candidates":
-
"For each optimal car-following state, one of the other car-following actions is randomly selected for the solution".
- In other words, in
V
(expert
)>
V
(other_candidates
) goal, "other_candidates
" refers to random policies. - It would have been interesting to have "better" competitors, for instance policies that are optional w.r.t. the current estimate of
R
function. E.g. learnt withRL
algorithms.- That would lead to an iterative process that stops when
R
converges.
- That would lead to an iterative process that stops when
-
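- A compact sketch of the kernel feature transformation of step `2-` and the linear reward of step `3-` (grid sizes follow the paper, the bandwidth and normalization are my assumptions):

```python
import numpy as np

# centres of the 15 speed intervals and 36 distance intervals (after normalization to [0, 1])
speed_centres = np.linspace(0.0, 1.0, 15)
dist_centres = np.linspace(0.0, 1.0, 36)
centres = np.array([(v, d) for v in speed_centres for d in dist_centres])  # 540 x 2
sigma = 0.1  # bandwidth of the Gaussian radial kernels (assumption)

def kernel_features(state):
    """Map the continuous 2d state (ego-speed, dist-to-leader) to 540 radial responses."""
    diff = centres - np.asarray(state, dtype=float)
    return np.exp(-np.sum(diff ** 2, axis=1) / (2.0 * sigma ** 2))

def one_step_reward(theta, state):
    """One-step reward, assumed linear in the 540 kernel features."""
    return float(np.dot(theta, kernel_features(state)))
```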
"A Human-like Trajectory Planning Method by Learning from Naturalistic Driving Data"
-
[
2018
] [π] [ πPeking University
] [ πGroupe PSA
] -
[
sampling-based trajectory planning
]
Click to expand
Source. |
Authors: He, X., Xu, D., Zhao, H., Moze, M., Aioun, F., & Franck, G.
- One idea: couple learning and sampling for motion planning.
- More precisely, learn from human demonstrations (offline) how to weight different contributions in a cost function (as opposed to hand-crafted approaches).
- This cost function is then used for trajectory planning (online) to evaluate and select one trajectory to follow, among a set of candidates generated by sampling methods.
- One detail: the weights of the optimal cost function minimise the sum of [
prob
(candidate
) *similarities
(demonstration
,candidate
)].- It is clear to me how a cost can be converted to some probability, using
softmax()
. - But for the similarity measure of a trajectory candidate, how to compute "its distance to the human driven one at the same driving situation"?
- Should the expert car have driven exactly on the same track before or is there any abstraction in the representation of the situation?
- How can it generalize at all if the similarity is estimated only on the location and velocity? The traffic situation will be different between two drives.
- It is clear to me how a cost can be converted to some probability, using
- One quote:
"The more similarity (i.e. less distance) the trajectory has with the human driven one, the higher probability it has to be selected."
"Learning driving styles for autonomous vehicles from demonstration"
-
[
2015
] [π] [ πUniversity of Freiburg
] [ πBosch
] -
[
MaxEnt IRL
]
Click to expand
Source. |
Authors: Kuderer, M., Gulati, S., & Burgard, W.
- One important contribution: Deal with continuous features such as
integral of jerk
over the trajectory. - One motivation: Derive a cost function from observed trajectories.
- The trajectory object is first mapped to some feature vector (
speed
,acceleration
...).
- The trajectory object is first mapped to some feature vector (
- One Q&A: How to then derive a
cost
(orreward
) from these features?- The authors assume the cost function to be a linear combination of the features.
- The goal is then about learning the weights.
- They acknowledge in the conclusion that it may be a too simple model. Maybe neural nets could help to capture some more complex relations.
- One concept: "Feature matching":
-
"Our goal is to find a generative model
p
(traj
|weights
) that yields trajectories that are similar to the observations." - How to define the "Similarity"?
- The "features" serve as a measure of similarity.
-
- Another concept: "
ME-IRL
" = Maximum EntropyIRL
.- One issue: This "feature matching" formulation is ambiguous.
- There are potentially many (degenerated) solutions
p
(traj
|weights
). For instanceweights
= zeros.
- There are potentially many (degenerated) solutions
- One idea is to introduce an additional goal:
- In this case: "Among all the distributions that match features, they to select the one that maximizes the entropy."
- The probability distribution over trajectories is in the form
exp
(-cost[features(traj), ΞΈ]
), to model that agents are exponentially more likely to select trajectories with lower cost.
- One issue: This "feature matching" formulation is ambiguous.
- About the maximum likelihood approximation in
MaxEnt-IRL
:- The gradient of the Lagrangian cost function turns to be the difference between two terms:
1-
The empirical feature values (easy to compute from the recorded).2-
The expected feature values (hard to compute: it requires integrating over all possible trajectories).- An approximation is made to estimate the expected feature values: the authors compute the feature values of the "most" likely trajectory, instead of computing the expectations by sampling.
- Interpretation:
-
"We assume that the demonstrations are in fact generated by minimizing a cost function (
IOC
), in contrast to the assumption that demonstrations are samples from a probability distribution (IRL
)".
-
- The gradient of the Lagrangian cost function turns to be the difference between two terms:
- One related work:
- "Learning to Predict Trajectories of Cooperatively Navigating Agents" by (Kretzschmar, H., Kuderer, M., & Burgard, W., 2014).
"PLOP: Probabilistic poLynomial Objects trajectory Planning for autonomous driving"
-
[
2020
] [π] [[ποΈ](TO COME)] [ πValeo
] -
[
Gaussian mixture
,multi-trajectory prediction
,nuScenes
,A2D2
,auxiliary loss
]
Click to expand
The architecture has two main sections: an encoder to synthesize information and the predictor where we exploit it. Note that PLOP does not use the classic RNN decoder scheme for trajectory generation, preferring a single step version which predicts the coefficients of a polynomial function instead of the consecutive points. Also note the navigation command that conditions the ego prediction. Source. |
PLOP uses multimodal sensor data input: Lidar and camera . The map is accumulated over the past 2s , so 20 frames. It produces a multivariate gaussian mixture for a fixed number of K possible trajectories over a 4s horizon. Uncertainty and variability are handled by predicting vehicle trajectories as a probabilistic Gaussian Mixture models, constrained by a polynomial formulation. Source. |
Authors: Buhet, T., Wirbel, E., & Perrotton, X.
-
Motivations:
- The goal is to predict multiple feasible future trajectories both for the ego vehicle and neighbors through a probabilistic framework.
- In addition in an
end-to-end
trainable fashion.
- In addition in an
- It builds on a previous work: "Conditional vehicle trajectories prediction in carla urban environment" - (Buhet, Wirbel, & Perrotton, 2019). See analysis further below.
- The trajectory prediction based on polynomial representation is upgraded from deterministic output to multimodal probabilistic output.
- It re-uses the navigation command input for the conditional part of the network, e.g.
follow
,left
,straight
,right
. - One main difference is the introduction of a new input sensor: Lidar.
- And adding a semantic segmentation auxiliary loss.
- The authors also reflect about what metrics is relevant for trajectory prediction:
-
"We suggest to use two additional criteria to evaluate the predictions errors, one based on the most confident prediction, and one weighted by the confidence [how alternative trajectories with non maximum weights compare to the most confident trajectory]."
-
- The goal is to predicte multiple feasible future trajectories both for the ego vehicle and neighbors through a probabilistic framework.
-
One term: "Probabilistic poLynomial Objects trajectory Planning" =
PLOP
. -
I especially like their review on related works about data-driven predictions (section taken from the paper):
SocialLSTM
: encodes the relations between close agents introducing a social pooling layer.- -Deterministic approaches derived from
SocialLSTM
:SEQ2SEQ
presents a newLSTM
-based encoder-decoder network to predict trajectories into an occupancy grid map.SocialGAN
andSoPhie
use generative adversarial networks to tackle uncertainty in future paths and augment the original set of samples.CS-LSTM
extendsSocialLSTM
using convolutional layers to encode the relations between the different agents.
ChauffeurNet
uses a sophisticated neural network with a complex high level scene representation (roadmap
,traffic lights
,speed limit
,route
,dynamic bounding boxes
, etc.) for deterministic ego vehicle trajectory prediction.
- Other works use a graph representation of the interactions between the agents in combination with neural networks for trajectory planning.
- Probabilistic approaches:
- Many works like
PRECOG
,R2P2
,Multiple Futures Prediction
,SocialGAN
include probabilistic estimation by adding a probabilistic framework at the end of their architecture producing multiple trajectories for ego vehicle, nearby vehicles or both. - In
PRECOG
,Rhinehart et al.
build a probabilistic model that explicitly models interactions between agents, using latent variables to model the plausible reactions of agents to each other, with a possibility to pre-condition the trajectory of the ego vehicle by a goal. MultiPath
also reuses an idea from object detection algorithms using trajectory anchors extracted from the training data for ego vehicle prediction.
- Many works like
-
About the auxiliary semantic segmentation task.
- Teaching the network to represent such semantic in its features improves the prediction.
-
"Our objective here is to make sure that in the
RGB
image encoding, there is information about the road position and availability, the applicability of the traffic rules (traffic sign/signal), the vulnerable road users (pedestrians, cyclists, etc.) position, etc. This information is useful for trajectory planning and brings some explainability to our model."
-
About interactions with other vehicles.
- The prediction for each vehicle does not have direct access to the sequence of history positions of others.
-
"The encoding of the interaction between vehicles is implicitly computed by the birdview encoding."
- The number of predicted trajectories is fixed in the network architecture.
K=12
is chosen.-
"It allows our architecture to be agnostic to the number of considered neighbors."
-
-
Multi-trajectory prediction in a probabilistic framework.
-
"We want to predict a fixed number
K
of possible trajectories for each vehicle, and associate them to a probability distribution overx
andy
:x
is the longitudinal axis,y
the lateral axis, pointing left." - About the Gaussian Mixture.
- Vehicle trajectories are predicted as probabilistic Gaussian Mixture models, constrained by a polynomial formulation: The mean of the distribution is expressed using a polynomial of degree
4
of time. -
"In the end, this representation can be interpreted as predicting
K
trajectories, each associated with a confidenceΟk
[mixture weights shared for all sampled points belonging to the same trajectory], with sampled points following a Gaussian distribution centered on (Β΅k,x,t
,Β΅k,y,t
) and with standard deviation (Οk,x,t
,Οk,y,t
)." -
"
PLOP
does not use the classicRNN
decoder scheme for trajectory generation, preferring a single step version which predicts the coefficients of a polynomial function instead of the consecutive points." - This offers a measure of uncertainty on the predictions.
- For the ego car, the probability distribution is conditioned by the navigation command.
- Vehicle trajectories are predicted as probabilistic Gaussian Mixture models, constrained by a polynomial formulation: The mean of the distribution is expressed using a polynomial of degree
- About the loss:
negative log-likelihood
over all sampled points of the ground truth ego and neighbour vehicles trajectories.- There is also the auxiliary
cross entropy loss
for segmentation.
-
-
Some findings:
- The presented model seems very robust to the varying number of neighbours.
- Finally, for
5
agents or more,PLOP
outperforms by a large margin allESP
andPRECOG
, on authors-defined metrics. -
"This result might be explained by our interaction encoding which is robust to the variations of
N
using only multiple birdview projections and our non-iterative single step trajectory generation."
- Finally, for
-
"Using
K = 1
approach yields very poor results, also visible in the training loss. It was an anticipated outcome due to the ambiguity of human behavior."
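- A sketch of the negative log-likelihood of one sampled ground-truth point under the predicted mixture (diagonal Gaussians over `x`/`y`, `K` components; in the paper the means additionally come from a degree-`4` polynomial of time - this simplified version is my own):

```python
import numpy as np

def point_nll(x, y, weights, mu_x, mu_y, sigma_x, sigma_y):
    """weights: (K,) mixture confidences; mu_*, sigma_*: (K,) means / std-devs at this time step."""
    log_gauss = (-0.5 * ((x - mu_x) / sigma_x) ** 2
                 - 0.5 * ((y - mu_y) / sigma_y) ** 2
                 - np.log(2.0 * np.pi * sigma_x * sigma_y))
    log_mix = np.log(weights) + log_gauss
    m = log_mix.max()                                   # log-sum-exp over the K trajectories
    return -(m + np.log(np.exp(log_mix - m).sum()))
```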
"Probabilistic Future Prediction for Video Scene Understanding"
-
[
2020
] [π] [ποΈ] [ποΈ (blog)] [ πUniversity of Cambridge
] [ πWayve
] -
[
multi frame
,multi future
,auxiliary learning
,multi-task
,conditional imitation learning
]
Click to expand
One main motivation is to supply the Control module (e.g. policy learnt via IL ) with a representation capable of modelling probability of future events. The Dynamics module produces such spatio-temporal representation, not directly from images but from learnt scene features. That embeddings, that are used by the Control in order to learn driving policy, can be explicitly decoded to future semantic segmentation , depth , and optical flow . Note that the stochasticity of the future is modelled with a conditional variational approach minimises the divergence between the present distribution (what could happen given what we have seen) and the future distribution (what we observe actually happens). During inference, diverse futures are generated by sampling from the present distribution . Source. |
There are many possible futures approaching this four-way intersection. Using 3 different noise vectors makes the model imagine different driving manoeuvres at an intersection: driving straight , turning left or turning right . These samples predict 10 frames, or 2 seconds into the future. Source. |
The differential entropy of the present distribution, characterizing how unsure the model is about the future is used. As we approach the intersection, it increases. Source. |
Authors: Hu, A., Cotter, F., Mohan, N., Gurau, C., & Kendall, A.
-
Motivations:
1-
Supply the control module (e.g.IL
) with an appropriate representation forinteraction-aware
anduncertainty-aware
decision-making, i.e. one capable of modelling probability of future events.- Therefore the policy should receive temporal features explicitly trained to predict the future.
- Motivation for that: It is difficult to learn an effective temporal representation by only using imitation error as a learning signal.
- Therefore the policy should receive temporal features explicitly trained to predict the future.
- Others:
2-
"multi frame
andmulti future
" prediction.- Perform prediction:
- ... based on multiple past frames (i.e. not a single one).
- ... and producing multiple possible outcomes (i.e. not deterministic).
- Predict the stochasticity of the future, i.e. contemplate multiple possible outcomes and estimate the multi-modal uncertainty.
- Perform prediction:
3-
Offer a differentiable / end-to-end trainable system, as opposed to system that reason over hand-coded representations.- I understand it as considering the loss of the
IL
part into the layers that create the latent representation.
- I understand it as considering the loss of the
4-
Cope with multi-agent interaction situations such as traffic merging, i.e. do not predict the behaviour of each actor in the scene independently.- For instance by jointly predicting ego-motion and motion of other dynamic agents.
5-
Do not rely on anyHD-map
to predict the static scene, to stay resilient toHD-map
errors due to e.g. roadworks.
-
auxiliary learning
: The loss used to train the latent representation is composed of three terms (c.f. motivation3-
):future-prediction
: weighted sum of futuresegmentation
,depth
andoptical flow
losses.probabilistic
:KL
-divergence between thepresent
and thefuture
distributions.control
: regression for future time-steps up to someFuture control horizon
.
-
Let's explore some ideas behinds these three components.
-
1-
Temporal video encoding: How to build a temporal and visual representation?-
What should be predicted?
-
"Previous work on probabilistic future prediction focused on trajectory forecasting [DESIRE, Lee et al. 2017, Bhattacharyya et al. 2018, PRECOG, Rhinehart et al. 2019] or were restricted to single-frame image generation and low resolution (64x64) datasets that are either simulated (Moving MNIST) or with static scenes and limited dynamics."
-
"Directly predicting in the high-dimensional space of image pixels is unnecessary, as some details about the appearance of the world are irrelevant for planning and control."
- Instead, the task is to predict a more complete scene representation with
segmentation
,depth
, andflow
, two seconds in the future.
-
-
What should the
temporal module
process?- The temporal model should learn the spatio-temporal features from perception encodings [as opposed to RGB images].
- These encodings are "scene features" extracted from images by a
Perception
module. They constitute a more powerful and compact representation compared to RGB images.
-
How does the
temporal module
look like?-
"We propose a new spatio-temporal architecture that can learn hierarchically more complex features with a novel 3D convolutional structure incorporating both local and global space and time context."
-
-
The authors introduce a so-called
Temporal Block
module for temporal video encoding.- These
Temporal Block
should help to learn hierarchically more complex temporal features. With two main ideas: 1-
Decompose the convolutional filters and play with all possible configuration.-
"Learning
3D
filters is hard. Decomposing into two subtasks helps the network learn more efficient." -
"State-of-the-art works decompose
3D
filters into spatial and temporal convolutions. The model we propose further breaks down convolutions into many space-time combinations and context aggregation modules, stacking them together in a more complex hierarchical representation."
-
2-
Incorporate the "global context" in the features (I did not fully understand that).- They concatenate some local features based on
1x1x1
compression with some global features extracted withaverage pooling
. -
"By pooling the features spatially and temporally at different scales, each individual feature map also has information about the global scene context, which helps in ambiguous situations."
- They concatenate some local features based on
- These
-
-
2-
Probabilistic prediction: how to generate multiple futures?-
"There are various reasons why modelling the future is incredibly difficult: natural-scene data is rich in details, most of which are irrelevant for the driving task, dynamic agents have complex temporal dynamics, often controlled by unobservable variables, and the future is inherently uncertain, as multiple futures might arise from a unique and deterministic past."
- The idea is that the uncertainty of the future can be estimated by making the prediction probabilistic.
-
"From a unique past in the real-world, many futures are possible, but in reality we only observe one future. Consequently, modelling multi-modal futures from deterministic video training data is extremely challenging."
- Another challenge when trying to learn a multi-modal prediction model:
-
"If the network predicts a plausible future, but one that did not match the given training sequence, it will be heavily penalised."
-
-
"Our work addresses this by encoding the future state into a low-dimensional
future distribution
. We then allow the model to have a privileged view of the future through the future distribution at training time. As we cannot use the future at test time, we train apresent distribution
(using only the current state) to match thefuture distribution
through aKL
-divergence loss. We can then sample from the present distribution during inference, when we do not have access to the future."
-
- To put it another way, two probability distributions are modelled, in a conditional variational approach:
- A present distribution
P
, that represents all what could happen given the past context. - A future distribution
F
, that represents what actually happened in that particular observation.
- A present distribution
-
[Learning to align the
present distribution
with thefuture distribution
] "As the future is multimodal, different futures might arise from a unique past contextzt
. Each of these futures will be captured by the future distributionF
that will pull the present distributionP
towards it." - How to evaluate predictions?
-
"Our probabilistic model should be accurate, that is to say at least one of the generated future should match the ground truth future. It should also be diverse".
- The authors use a diversity distance metric (
DDM
), which measures both accuracy and diversity of the distribution.
-
- How to quantify uncertainty?
- The framework can automatically infer which scenes are unusual or unexpected and where the model is uncertain of the future, by computing the differential entropy of the
present distribution
. - This is useful for understanding edge-cases and when the model needs to "pay more attention".
- The framework can automatically infer which scenes are unusual or unexpected and where the model is uncertain of the future, by computing the differential entropy of the
-
-
3-
The rich spatio-temporal features explicitly trained to predict the future are used to learn a driving policy.-
Conditional Imitation Learning is used to learn
speed
andsteering
controls, i.e. regressing to the expert's true control actions {v
,ΞΈ
}.- One reason is that it is immediately transferable to the real world.
-
From the ablation study, it seems to highly benefit from both:
1-
Thetemporal features
."It is too difficult to forecast how the future is going to evolve with a single image".
2-
The fact that these features are capable ofprobabilistic predictions
.- Especially for multi-agent interaction scenarios.
-
About the training set:
-
"We address the inherent dataset bias by sampling data uniformly across lateral and longitudinal dimensions. First, the data is split into a histogram of bins by
steering
, and subsequently byspeed
. We found that weighting each data point proportionally to the width of the bin it belongs to avoids the need for alternative approaches such as data augmentation."
-
-
-
One exciting future direction:
- For the moment, the
control
module takes the representation learned from dynamics models. And ignores the predictions themselves.- By the way, why are predictions, especially for the ego trajectories, not conditionned on possible actions?
- It could use these probabilistic embedding capable of predicting multi-modal and plausible futures to generate imagined experience to train a policy in a model-based
RL
. - The design of the
reward
function from the latent space looks challenging at first sight.
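- To make the `probabilistic` loss term concrete, a sketch of the conditional-variational ingredient with diagonal Gaussians (shapes and names are my assumptions, purely illustrative):

```python
import numpy as np

def kl_future_to_present(mu_f, logvar_f, mu_p, logvar_p):
    """KL( future N(mu_f, var_f) || present N(mu_p, var_p) ), summed over the latent dimensions.
    Minimising it pulls the present distribution towards the observed future."""
    var_f, var_p = np.exp(logvar_f), np.exp(logvar_p)
    return 0.5 * np.sum(logvar_p - logvar_f + (var_f + (mu_f - mu_p) ** 2) / var_p - 1.0)

def sample_future_noise(mu_p, logvar_p, rng=np.random):
    """At test time, diverse futures are imagined by sampling from the present distribution."""
    return mu_p + np.exp(0.5 * logvar_p) * rng.standard_normal(np.shape(mu_p))
```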
"Efficient Behavior-aware Control of Automated Vehicles at Crosswalks using Minimal Information Pedestrian Prediction Model"
-
[
2020
] [π] [ πUniversity of Michigan
,University of Massachusetts
] -
[
interaction-aware decision-making
,probabilistic hybrid automaton
]
Click to expand
The pedestrian crossing behaviour is modelled as a probabilistic hybrid automaton. Source. |
The interaction is captured inside a gap-acceptance model: the pedestrian evaluates the available time gap to cross the street and either accept the gap by starting to cross or reject the gap by waiting at the crosswalk. Source. |
The baseline controller used for comparison is a finite state machine (FSM ) with four states. Whenever a pedestrian starts walking to cross the road, the controller always tries to stop, either by yielding or through hard stop . Source. |
Authors: Jayaraman, S. K., Jr, L. P. R., Yang, X. J., Pradhan, A. K., & Tilbury, D. M.
-
Motivations:
- Scenario: interaction with a pedestrian
approaching
/crossing
/waiting
at a crosswalk. 1-
A (1.1
) simple and (1.2
) interaction-aware pedestrianprediction
model.- That means no requirement of extensive amounts of data.
-
"The crossing model as a hybrid system with a gap acceptance model that required minimal information, namely pedestrian's
position
andvelocity
".- It does not require information about pedestrian
actions
orpose
. - It builds on "Analysis and prediction of pedestrian crosswalk behavior during automated vehicle interactions" by (Jayaraman, Tilbury, Yang, Pradhan, & Jr, 2020).
- It does not require information about pedestrian
2-
Effectively incorporating thesepredictions
in acontrol
framework- The idea is to first forecast the position of the pedestrian using a pedestrian model, and then react accordingly.
3-
Be efficient on bothwaiting
andapproaching
pedestrian scenarios.- Assuming always a
crossing
may lead to over-conservative policies. -
"[in simulation] only a fraction of pedestrians (
80%
) are randomly assigned the intention to cross the street."
- Assuming always a
- Scenario: interaction with a pedestrian
-
Why are
CV
andCA
prediction models not applicable?-
"At crosswalks, pedestrian behavior is much more unpredictable as they have to wait for an opportunity and decide when to cross."
- Longer durations are needed.
1-
Interaction must be taken into account.2-
The authors decide to model pedestrians as a hybrid automaton that switches between discrete actions.
-
-
One term: Behavior-aware Model Predictive Controller (
B-MPC
)1-
The pedestrian crossing behaviour is modelled as a probabilistic hybrid automaton:- Four states:
Approach Crosswalk
,Wait
,Cross
,Walk away
. - Probabilistic transitions: using pedestrian's
gap acceptance
- hence capturing interactions.-
"What is the probability of accepting the current traffic gap?
-
"Pedestrians evaluate the available time gap to cross the street and either accept the gap by starting to cross or reject the gap by waiting at the crosswalk."
-
2-
The problem is formulated as a constrained quadratic optimization problem:- Cost:
success
(passing the crosswalk),comfort
(penalize jerk and sudden changes in acceleration),efficiency
(deviation from the reference speed). - Constraints: respect
motion model
, restrictvelocity
,acceleration
, as well asjerk
, and ensurecollision avoidance
. - Solver: standard quadratic program solver in
MATLAB
.
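To make the `QP` formulation concrete, here is a minimal sketch (my own toy formulation with `cvxpy`, not the authors' `MATLAB` implementation), assuming a point-mass longitudinal model, made-up weights, and a crosswalk occupancy interval that would come from the pedestrian prediction model:

```python
# Minimal B-MPC-style longitudinal QP sketch (hypothetical names and values).
import cvxpy as cp
import numpy as np

N, dt = 20, 0.2                        # horizon steps, step size [s]
v_ref, a_max, j_max = 10.0, 2.0, 2.5   # reference speed, accel and jerk bounds

s = cp.Variable(N + 1)                 # longitudinal position along the ego path [m]
v = cp.Variable(N + 1)                 # speed [m/s]
a = cp.Variable(N)                     # acceleration (control input) [m/s^2]

# Assumed output of the pedestrian model: steps where the crosswalk (starting at s=40 m)
# is predicted to be occupied.
occupied_steps = range(5, 15)

cost = cp.sum_squares(v - v_ref)                      # efficiency: deviation from reference speed
cost += 10 * cp.sum_squares(a)                        # comfort: acceleration effort
cost += 10 * cp.sum_squares(a[1:] - a[:-1]) / dt**2   # comfort: jerk

constr = [s[0] == 0, v[0] == v_ref]                   # current state
for k in range(N):
    constr += [s[k + 1] == s[k] + v[k] * dt,          # motion model
               v[k + 1] == v[k] + a[k] * dt,
               v[k + 1] >= 0, v[k + 1] <= 15,
               cp.abs(a[k]) <= a_max]
constr += [cp.abs(a[1:] - a[:-1]) <= j_max * dt]      # jerk constraint
constr += [s[k] <= 40 for k in occupied_steps]        # collision avoidance: stay before the crosswalk

cp.Problem(cp.Minimize(cost), constr).solve()
print(np.round(a.value, 2))
```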
-
Performances:
- Baseline controller:
- Finite state machine (
FSM
) with four states:Maintain Speed
,Accelerate
,Yield
, andHard Stop
. -
"Whenever a pedestrian starts walking to cross the road, the controller always tries to stop, either by
yielding
or throughhard stop
." -
"The Boolean variable
InCW
, denotes the pedestrian's crossing activity:InCW=1
from the time the pedestrian started moving laterally to cross until they completely crossed theAV
lane, andInCW=0
otherwise." - That means the baseline controller does not react at all to "non-crossing" cases since it never sees the pedestrian crossing laterally.
-
"It can be seen that the
B-MPC
is more aggressive, efficient, and comfortable than the baseline as observed through the higher average velocity, lower average acceleration effort, and lower average jerk respectively."
"Learning Predictive Models From Observation and Interaction"
-
[
2019
] [π] [ποΈ] [ πUniversity of Pennsylvania
,Stanford University
,UC Berkeley
] [ πHonda Research Institute
] -
[
visual prediction
,domain transfer
,nuScenes
,BDD100K
]
Click to expand
The idea is to learn a latent representation z that corresponds to the true action . The model can then perform joint training on the two kinds of data: it optimizes the likelihood of the interaction data, for which the action s are available, and observation data, for which the action s are missing. Hence the visual predictive model can predict the next frame xt+1 conditioned on the current frame xt and action learnt representation zt . Source. |
The visual prediction model is trained using two driving sets: action -conditioned videos from Boston and action -free videos from Singapore. Frames from both subsets come from BDD100K and nuScenes datasets. Source. |
Authors: Schmeckpeper, K., Xie, A., Rybkin, O., Tian, S., Daniilidis, K., Levine, S., & Finn, C.
- On concrete industrial use-case:
-
"Imagine that a self-driving car company has data from a fleet of cars with sensors that record both
video
and the driver's actions
in one city, and a second fleet of cars that only record dashboardvideo
, withoutaction
s, in a second city." -
"If the goal is to train an
action
-conditioned model that can be utilized to predict the outcomes of steeringaction
s, our method allows us to train such a model using data from both cities, even though only one of them hasaction
s."
-
- Motivations (mainly for robotics, but also AD):
- Generate predictions for complex tasks and new environments, without costly expert demonstrations.
- More precisely, learn an
action
-conditioned video predictive model from two kinds of data:1-
passive observations: [x0, x1, ... xN
].- Videos of another agent, e.g. a human, might show the robot how to use a tool.
- Observations represent a powerful source of information about the world and how actions lead to outcomes.
- A learnt model could also be used for
planning
andcontrol
, i.e. to plan coordinated sequences of actions to bring about desired outcomes. - But may suffer from large domain shifts.
2-
active interactions: [x0, a1, x1, ... aN, xN
].- Usually more expensive.
- Two challenges:
1-
Observations are not annotated with suitableaction
s: e.g. only access to the dashcam, not thethrottle
for instance.- In other words,
action
s are only observed in a subset of the data. - The goal is to learn from videos without
action
s, allowing it to leverage videos of agents for which the actions are unknown (unsupervised manner).
- In other words,
2-
Shift in the "embodiment" of the agent: e.g. robots' arms and humans' ones have physical differences.- The goal is to bridge the gap between the two domains (e.g.,
human arms
vs.robot arms
).
- What is learnt?
p
(xc+1:T
|x1:c
,a1:T
)- I.e. prediction of future frames conditioned on a set of
c
context frames and sequence of actions.
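A rough sketch of the joint-training idea (my own simplification with hypothetical modules, not the paper's latent-variable model): the same prediction loss is used for both data sources, with the recorded action when it is available and an inferred surrogate `z` otherwise:

```python
# Toy sketch: joint training on interaction data (actions given) and observation data (actions missing).
import torch
import torch.nn as nn

class PredictiveModel(nn.Module):
    def __init__(self, frame_dim=256, action_dim=2, z_dim=8):
        super().__init__()
        self.action_enc = nn.Linear(action_dim, z_dim)              # true action -> z
        self.inverse    = nn.Linear(2 * frame_dim, z_dim)           # (xt, xt+1)  -> z when the action is missing
        self.dynamics   = nn.Linear(frame_dim + z_dim, frame_dim)   # (xt, z)     -> predicted xt+1

    def forward(self, x_t, x_tp1=None, a_t=None):
        z = self.action_enc(a_t) if a_t is not None else self.inverse(torch.cat([x_t, x_tp1], -1))
        return self.dynamics(torch.cat([x_t, z], -1))

model = PredictiveModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

def train_step(x_t, x_tp1, a_t=None):
    pred = model(x_t, x_tp1, a_t)     # same objective for both kinds of batches
    loss = mse(pred, x_tp1)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy usage with random "frame features" standing in for encoded images.
x_t, x_tp1 = torch.randn(4, 256), torch.randn(4, 256)
train_step(x_t, x_tp1, a_t=torch.randn(4, 2))   # Boston-like batch: actions recorded
train_step(x_t, x_tp1)                          # Singapore-like batch: actions missing
```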
- What tests?
1-
Different environment within the same underlying dataset: driving inBoston
andSingapore
.2-
Same environment but different embodiment:humans
androbots
manipulate objects with different arms.
- What is assessed?
1-
Prediction quality (AD
test).2-
Control performance (robotics
test).
"Deep Learning-based Vehicle Behaviour Prediction For Autonomous Driving Applications: A Review"
-
[
2019
] [π] [ πUniversity of Warwick
] [ πJaguar Land Rover
] -
[
multi-modality prediction
]
Click to expand
The authors propose a new classification of behavioural prediction methods. Only deep learning approaches are considered and physics -based approaches are excluded. The criteria are about the input , output and deep learning method . Source. |
First criterion is about the input: What is the prediction based on? Important is to capture road structure and interactions while staying flexible in the representation (e.g. describe different types of intersections and work with varying numbers of target vehicles and surrounding vehicles ). Partial observability should be considered by design. Source. |
Second criterion is about the output: What is predicted? Important is to propagate the uncertainty from the input and consider multiple options (multi-modality). Therefore to reason with probabilities. Bottom - why multi-modality is important. Source. |
Authors: Mozaffari, S., Al-Jarrah, O. Y., Dianati, M., Jennings, P., & Mouzakitis, A.
- One mentioned review: (Lefèvre et al.) classifies vehicle (behaviour) prediction models to three groups:
1-
physics
-based- Use dynamic or kinematic models of vehicles, e.g. a constant velocity (
CV
) Kalman Filter model.
- Use dynamic or kinematic models of vehicles, e.g. a constant velocity (
2-
manoeuvre
-based- Predict vehicles' manoeuvres, i.e. a classification problem from a defined set.
3-
interaction
-aware- Consider interaction of vehicles in the input.
- About the terminology:
- "Target Vehicles" (
TV
) are vehicles whose behaviour we are interested in predicting. - The other are "Surrounding Vehicles" (
SV
). - The "Ego Vehicle" (
EV
) can be also considered as anSV
, if it is close enough toTV
s.
- "Target Vehicles" (
- Here, the authors ignore the
physics
-based methods and propose three criteria for comparison:1-
Input.- Track history of
TV
only. - Track history of
TV
andSV
s. - Simplified birdβs eye view.
- Raw sensor data.
- Track history of
2-
Output.- Intention
class
: From a set of pre-defined discrete classes, e.g.go straight
,turn left
, andturn right
. - Unimodal
trajectory
: Usually the one with highest likelihood or the average). - Intention-based
trajectory
: Predict the trajectory that corresponds to the most probable intention (first case). - Multimodal
trajectory
: Combine the previous ones. Two options, depending if the intention set is fixed or dynamically learnt:static
intention set: predict for each member of the set (an extension to intention-based trajectory prediction approaches).dynamic
intention set: due to dynamic definition of manoeuvres, they are prone to converge to a single manoeuvre or not being able to explore all the existing manoeuvres.
- Intention
3-
In-between (deep learning method).RNN
are used because of their temporal feature extracting power.CNN
are used for their spatial feature extracting ability (especially with bird's eye views).
- Important considerations for behavioural prediction:
- Traffic rules.
- Road geometry.
- Multimodality: there may exist more than one possible future behaviour.
- Interaction.
- Uncertainty: both
aleatoric
(measurement noise) andepistemic
(partial observability). Hence the prediction should be probabilistic. - Prediction horizon: approaches can serve different purposes based on how far in the future they predict (
short-term
orlong-term
future motion).
- Two methods I would like to learn more about:
social pooling
layers, e.g. used by (Deo & Trivedi, 2019):-
"A social tensor is a spatial grid around the target vehicle that the occupied cells are filled with the processed temporal data (e.g.,
LSTM
hidden state value) of the corresponding vehicle. It contains both the temporal dynamic of vehicles represented and spatial inter-dependencies among them."
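As an illustration of the quoted "social tensor" (my own toy implementation, with arbitrary grid size and resolution):

```python
# Toy social tensor: occupied grid cells around the target vehicle receive the LSTM hidden
# state of the corresponding surrounding vehicle.
import numpy as np

def build_social_tensor(tv_xy, sv_xy, sv_hidden, grid=(13, 3), cell=(4.0, 3.7)):
    """tv_xy: (2,) target-vehicle position. sv_xy: (N, 2) surrounding-vehicle positions.
    sv_hidden: (N, H) their LSTM hidden states. Returns a (rows, cols, H) tensor."""
    rows, cols = grid
    tensor = np.zeros((rows, cols, sv_hidden.shape[1]))
    for xy, h in zip(sv_xy, sv_hidden):
        dx, dy = xy - tv_xy                   # longitudinal / lateral offsets [m]
        r = int(dx / cell[0]) + rows // 2     # map offsets to grid indices
        c = int(dy / cell[1]) + cols // 2
        if 0 <= r < rows and 0 <= c < cols:
            tensor[r, c] = h                  # occupied cell <- hidden state
    return tensor

# Toy usage: 3 surrounding vehicles with 16-dim hidden states.
social = build_social_tensor(np.zeros(2),
                             np.array([[8.0, 3.7], [-4.0, 0.0], [60.0, 0.0]]),
                             np.random.randn(3, 16))
print(social.shape)   # (13, 3, 16); the far-away vehicle falls outside the grid
```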
-
graph
neural networks, e.g. (Diehl et al., 2019) or (Li et al., 2019):- Graph Convolutional Network (
GCN
). - Graph Attention Network (
GAT
).
- Graph Convolutional Network (
- Comments:
- Contrary to the object detection task, there is no benchmark for systematically evaluating previous studies on vehicle behaviour prediction.
- Urban scenarios are excluded in the comparison since
NGSIM I-80
andUS-101 highway
driving datasets are used. - Maybe the INTERACTION Datasetβ could be used.
- Urban scenarios are excluded in the comparison since
- The authors suggest embedding domain knowledge in the prediction, and call for practical considerations (industry-supported research).
-
"Factors such as environment conditions and set of traffic rules are not directly inputted to the prediction model."
-
"Practical limitations such as sensor impairments and limited computational resources have not been fully taken into account."
-
"Multi-Modal Simultaneous Forecasting of Vehicle Position Sequences using Social Attention"
-
[
2019
] [π] [ πEcole CentraleSupelec
] [ πRenault
] -
[
multi-modality prediction
,attention mechanism
]
Click to expand
Two multi-head attention layers are used to account for social interactions between all vehicles. They are combined with LSTM layers to offer joint, long-range and multi-modal forecasts. Source. |
Source. |
Authors: Mercat, J., Gilles, T., Zoghby, N. El, Sandou, G., Beauvois, D., & Gil, G. P.
- Previous work:
"Social Attention for Autonomous Decision-Making in Dense Traffic"
by (Leurent, & Mercat, 2019), detailed on this page as well. - Motivations:
1-
joint
- Considering interactions between all vehicles.2-
flexible
- Independent of the number/order of vehicles.3-
multi-modal
- Considering uncertainty.4-
long-horizon
- Predicting over a long range. Here5s
on simple highway scenarios.5-
interpretable
- E.g. using the social attention coefficients.6-
long distance interdependencies
- The authors decide to exclude the spatial grid representations that "limit the zone of interest to a predefined fixed size and the spatial relation precision to the grid cell size".
- Main idea: Stack
LSTM
layers withsocial
multi-head
attention
layers.- More precisely, the model is broken into four parts:
1-
AnEncoder
processes the sequences of all vehicle positions (no information aboutspeed
,orientation
,size
orblinker
).2-
ASelf-attention
layer captures interactions between all vehicles using "dot product attention". It has "multiple head", each specializing on different interaction patterns, e.g."closest front vehicle in any lane"
.3-
APredictor
, usingLSTM
cells, forecasts the positions.- A second multi-head self-attention layer is placed here.
4-
A finalDecoder
produces sequences of Gaussians mixtures for each vehicle.-
"What is forecast is not a mixture of trajectory density functions but a sequence of position mixture density functions. There is a dependency between forecasts at time
tk
and at timetk+1
but no explicit link between the modes at those times."
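A minimal sketch of the main idea (my own toy model with assumed sizes, not the paper's architecture): per-vehicle `LSTM` encodings mixed by a multi-head self-attention layer, with a Gaussian output head per future step:

```python
# Toy "LSTM + social multi-head attention" forecaster.
import torch
import torch.nn as nn

class SocialAttentionForecaster(nn.Module):
    def __init__(self, d=64, heads=4, horizon=10):
        super().__init__()
        self.encoder = nn.LSTM(input_size=2, hidden_size=d, batch_first=True)     # (x, y) histories
        self.social = nn.MultiheadAttention(embed_dim=d, num_heads=heads, batch_first=True)
        self.head = nn.Linear(d, horizon * 5)   # per step: mu_x, mu_y, sigma_x, sigma_y, rho
        self.horizon = horizon

    def forward(self, tracks):                  # tracks: (n_vehicles, T_obs, 2)
        _, (h, _) = self.encoder(tracks)        # h: (1, n_vehicles, d) -> one "scene" of n vehicles
        h, attn = self.social(h, h, h)          # every vehicle attends to every other vehicle
        out = self.head(h).view(-1, self.horizon, 5)
        return out, attn                        # attn: (1, n_vehicles, n_vehicles) interaction matrix

model = SocialAttentionForecaster()
params, weights = model(torch.randn(6, 20, 2))  # 6 vehicles, 20 observed steps
print(params.shape, weights.shape)              # torch.Size([6, 10, 5]) torch.Size([1, 6, 6])
```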
-
- Two quotes about multi-modality prediction:
-
"When considering multiple modes, there is a challenging trade-off to find between anticipating a wide diversity of modes and focusing on realistic ones".
-
"
VAE
andGANs
are only able to generate an output distribution with sampling and do not express aPDF
".
-
- Baselines used to compare the presented "Social Attention Multi-Modal Prediction" approach:
- Constant velocity (
CV
), that uses Kalman filters (hence single modality). - Convolutional Social Pooling (
CSP
), that uses convolutional social pooling on a coarse spatial grid. Six mixture components are used. - Graph-based Interaction-aware Trajectory Prediction (
GRIP
), that uses aspatial
andtemporal
graph representation of the scene.
- Constant velocity (
"MultiPath : Multiple Probabilistic Anchor Trajectory Hypotheses for Behavior Prediction"
-
[
2019
] [π] [ πWaymo
] -
[
anchor
,multi-modality prediction
,weighted prediction
]
Click to expand
Source. |
A discrete set of intents is modelled as a set of K=3 anchor trajectories. Uncertainty is assumed to be unimodal given intent (here 3 intents are considered) while control uncertainty is modelled with a Gaussian distribution dependent on each waypoint state of an anchor trajectory. Such an example shows that modelling multiple intents is important. Source. |
Authors: Chai, Y., Sapp, B., Bansal, M., & Anguelov, D.
- One idea: "Anchor Trajectories".
- "Anchor" is a common idea in
ML
. Concrete applications of "anchor" methods forAD
includeFaster-RCNN
andYOLO
for object detections.- Instead of directly predicting the size of a bounding box, the
NN
predicts offsets from a predetermined set of boxes with particular height-width ratios. Those predetermined set of boxes are the anchor boxes. (explanation from this page).
- One could therefore draw a parallel between the sizes of bounding boxes in
Yolo
and the shape of trajectories: they could be approximated with some static predetermined patterns and refined to the current context (the actual task of theNN
here).-
"After doing some clustering studies on ground truth labels, it turns out that most bounding boxes have certain height-width ratios." [explanation about Yolo from this page]
-
"Our trajectory anchors are modes found in our training data in state-sequence space via unsupervised learning. These anchors provide templates for coarse-granularity futures for an agent and might correspond to semantic concepts like
change lanes
, orslow down
." [from the presented paper]
-
- This idea also reminds me of the concept of
pre-defined templates
used for path planning.
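A small sketch of how such anchors could be obtained (an assumption on my side, not Waymo's pipeline): clustering training trajectories in state-sequence space and keeping the cluster centres as templates:

```python
# Toy anchor-trajectory extraction via K-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
T = 30                                   # future steps per trajectory
# Toy "ground-truth futures": (x, y) sequences, here random walks standing in for real data.
trajectories = np.cumsum(rng.normal(size=(1000, T, 2)), axis=1)

K = 16                                   # number of anchors / intents
flat = trajectories.reshape(len(trajectories), -1)          # (N, T*2) state-sequence space
anchors = KMeans(n_clusters=K, n_init=10, random_state=0).fit(flat).cluster_centers_
anchors = anchors.reshape(K, T, 2)       # each anchor is a coarse template future

# At inference, the network would predict a softmax over the K anchors plus per-waypoint
# Gaussian offsets from the selected anchor (the "control uncertainty").
print(anchors.shape)                     # (16, 30, 2)
```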
- "Anchor" is a common idea in
- One motivation: model multiple intents.
- This contrasts with the numerous approaches which predict one single most-likely trajectory per agent, usually via supervised regression.
- The multi-modality is important since prediction is inherently stochastic.
- The authors distinguish between
intent uncertainty
andcontrol uncertainty
(conditioned on intent).
- The authors distinguish between
- A Gaussian Mixture Model (
GMM
) distribution is used to model both types of uncertainty.-
"At inference, our model predicts a discrete distribution over the anchors and, for each anchor, regresses offsets from anchor waypoints along with uncertainties, yielding a Gaussian mixture at each time step."
-
- One risk when working with multi-modality: directly learning a mixture suffers from issues of "mode collapse".
- This issue is common in
GAN
where the generator starts producing limited varieties of samples. - The solution implemented here is to estimate the anchors a priori before fixing them to learn the rest of our parameters (as for
Faster-RCNN
andYolo
for instance).
- Second motivation: weight the several trajectory predictions.
- This contrasts with methods that randomly sample from a generative model (e.g.
CVAE
andGAN
), leading to an unweighted set of trajectory samples (not to mention the problem of reproducibility and analysis). - Here, a parametric probability distribution is directly predicted: p(
trajectory
|observation
), together with a compact weighted set of explicit trajectories which summarizes this distribution well.- This contrasts with methods that outputs a probabilistic occupancy grid.
- This contrasts with methods that randomly sample from a generative model (e.g.
- About the "top-down" representation, structured in a
3d
array:- The first
2
dimensions represent spatial locations in the top-down image -
"The channels in the depth dimension hold
static
and time-varying (dynamic
) content of a fixed number of previous time steps."- Static context includes
lane connectivity
,lane type
,stop lines
,speed limit
. - Dynamic context includes
traffic light states
over the past5
time-steps. - The previous positions of the different dynamic objects are also encoded in some depth channels.
- Static context includes
- The first
- One word about the training dataset.
- The model is trained via
imitation learning
by fitting the parameters to maximize the log-likelihood of recorded driving trajectories. -
"The balanced dataset totals
3.85 million
examples, contains5.75 million
agent trajectories and constitutes approximately200 hours
of (real-world) driving."
"SafeCritic: Collision-Aware Trajectory Prediction"
-
[
2019
] [π] [ πUniversity of Amsterdam
] [ πBMW
] -
[
Conditional GAN
]
Click to expand
The Generator predicts trajectories that are scored against two criteria: The Discriminator (as in GAN ) for accuracy (i.e. consistent with the observed inputs) and the Critic (the generator acts as an Actor) for safety . The random noise vector variable z in the Generator can be sampled from N (0 , 1 ) to sample novel trajectories. Source. |
Several features offered by the predictions of SafeCritic : accuracy, diversity, attention and safety. Source. |
Authors: van der Heiden, T., Nagaraja, N. S., Weiss, C., & Gavves, E.
- Main motivation:
-
"We argue that one should take into account
safety
, when designing a model to predict future trajectories. Our focus is to generate trajectories that are not justaccurate
but also lead to minimum collisions and thus aresafe
. Safe trajectories are different from trajectories that try to imitate the ground truth, as the latter may lead toimplausible
paths, e.g, pedestrians going through walls." - Hence the trajectory predictions of the Generator are evaluated against multiple criteria:
Accuracy
: The Discriminator checks if the prediction is coherent / plausible with the observation.Safety
: Some Critic predicts the likelihood of a future dynamic and static collision.
- A third loss term is introduced:
-
"Training the generator is harder than training the discriminator, leading to slow convergence or even failure."
- An additional auto-encoding loss to the ground truth is introduced.
- It should encourage the model to avoid trivial solutions and mode collapse, and should increase the diversity of future generated trajectories.
- The term
mode collapse
means that instead of suggesting multiple trajectory candidates (multi-modal
), the model restricts its prediction to only one instance.
-
-
- About
RL
:- The authors mentioned several terms related to
RL
, in particular they try to draw a parallel with Inverse RL
:-
"
GANs
resembleIRL
in that the discriminator learns the cost function and the generator represents the policy."
-
- I got the gist of that idea, but I honestly did not understand where it was implemented here. In particular no
MDP
formulation is given.
- The authors mentioned several terms related to
- About attention mechanism:
-
"We rely on attention mechanism for spatial relations in the scene to propose a compact representation for modelling interaction among all agents [...] We employ an attention mechanism to prioritize certain elements in the latent state representations."
- The grid-like scene representation is shared by both the Generator and the Critic.
-
- About the baselines:
- I like the "related work" section which shortly introduces the state-of-the-art trajectory prediction models based on deep learning.
SafeCritic
takes inspiration from some of their ideas, such as:- Aggregation of past information about multiple agents in a recurrent model.
- Use of Conditional
GAN
to offer the possibility to also generate novel trajectory given observation via sampling (standardGANs
have no encoder).
- Incorporation of semantic visual features (extracted by deep networks) combined with an attention mechanism.
SocialGAN
,SocialLSTM
,Car-Net
,SoPhie
andDESIRE
are used as baselines.R2P2
andSocialAttention
are also mentioned.
- I like the "related work" section which shortly introduces the state-of-the-art trajectory prediction models based on deep learning.
"A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving"
- [
2019
] [π] [ πUniversity of Iasi
]
Click to expand
One figure:
Classification of motion models based on three increasingly abstract levels - adapted from (Lefèvre, S., Vasquez. D. & Laugier C. - 2014). Source. |
Authors: Leon, F., & Gavrilescu, M.
- A reference to one white paper: "Safety first for automated driving" 2019 - from Aptiv, Audi, Baidu, BMW, Continental, Daimler, Fiat Chrysler Automobiles, HERE, Infineon, Intel and Volkswagen (alphabetical order). The authors quote some of the good practices about Interpretation and Prediction:
- Predict only a short time into the future (the further the predicted state is in the future, the less likely it is that the prediction is correct).
- Rely on physics where possible (a vehicle driving in front of the automated vehicle will not stop in zero time on its own).
- Consider the compliance of other road users with traffic rules.
- Miscellaneous notes about prediction:
- The authors point the need of high-level reasoning (the more abstract the feature, the more reliable it is long term), mentioning both "affinity" and "attention" mechanisms.
- They also call for jointly addressing vehicle motion modelling and risk estimation (criticality assessment).
- Gaussian Processes are found to be a flexible tool for modelling motion patterns and are compared to Markov Models for prediction.
- In particular, GP regressions have the ability to quantify uncertainty (e.g. occlusion).
-
"CNNs can be superior to LSTMs for temporal modelling since trajectories are continuous in nature, do not have complicated "state", and have high spatial and temporal correlations".
"Deep Predictive Autonomous Driving Using Multi-Agent Joint Trajectory Prediction and Traffic Rules"
Click to expand
One figure:
The framework consists of four modules: encoder module, interaction module, prediction module and control module. Source. |
Authors: Cho, K., Ha, T., Lee, G., & Oh, S.
- One previous work: "Learning-Based Model Predictive Control under Signal Temporal Logic Specifications" by (Cho & Ho, 2018).
- One term: "robustness slackness" for
STL
-formula.- The motivation is to solve dilemma situations (inherent to strict compliance when all rules cannot be satisfied) by disobeying certain rules based on their predicted degree of satisfaction.
- The idea is to filter out non-plausible trajectories in the prediction step to only consider valid prediction candidates during planning.
- The filter considers some "rules" such as
Lane keeping
andCollision avoidance of front vehicle
orSpeed limit
(I did not understand why they are equally considered). - These rules are represented by Signal Temporal Logic (
STL
) formulas.- Note:
STL
is an extension of Linear Temporal Logic (with boolean predicates and discrete-time) with real-time and real-valued constraints.
- Note:
- A metric can be introduced to measure how well a given signal (here, a trajectory candidate) satisfies a
STL
formula.- This is called "robustness slackness" and acts as a margin to satisfaction of
STL
-formula.
- This is called "robustness slackness" and acts as a margin to satisfaction of
- This enables a "control under temporal logic specification" as mentioned by the authors.
- Architecture
- Encoder module: The observed trajectories are fed to some
LSTM
whose internal state is used by the two subsequent modules. - Interaction module: To consider interaction, all
LSTM
states are concatenated (joint state) together with a feature vector of relative distances. In addition, a CVAE is used for multi-modality (several possible trajectories are generated) and capture interactions (I did not fully understand that point), as stated by the authors:-
"The latent variable
z
models inherent structure in the interaction of multiple vehicles, and it also helps to describe underlying ambiguity of future behaviours of other vehicles."
-
- Prediction module: Based on the
LSTM
states, the concatenated vector and the latent variable, both future trajectories and margins to the satisfaction of each rule are predicted. - Control module: An
MPC
optimizes the control of the ego car, deciding which rules should be prioritized based on the two predicted objects (trajectories and robustness slackness).
- Encoder module: The observed trajectories are fed to some
"An Online Evolving Framework for Modeling the Safe Autonomous Vehicle Control System via Online Recognition of Latent Risks"
-
[
2019
] [π] [ πOhio State University
] [ πFord
] -
[
MDP
,action-state transitions matrix
,SUMO
,risk assessment
]
Click to expand
One figure:
Both the state space and the transition model are adapted online, offering two features: prediction about the next state and detection of unknown (i.e. risky ) situations. Source. |
Authors: Han, T., Filev, D., & Ozguner, U.
- Motivation
- "Rule-based and supervised-learning methods cannot recognize unexpected situations so that the AV controller cannot react appropriately under unknown circumstances."
- Based on their previous work on RL "Highway Traffic Modeling and Decision Making for Autonomous Vehicle Using Reinforcement Learning" by (You, Lu, Filev, & Tsiotras, 2018).
- Main ideas: Both the state space and the transition model (here discrete state space so transition matrices) of an MDP are adapted online.
- I understand it as trying to learn the transition model (experience is generated using
SUMO
), hence to some extent going toward model-based RL. - The motivation is to assist any AV control framework with a so-called "evolving Finite State Machine" (
e
-FSM
).- By identifying state-transitions precisely, the future states can be predicted.
- By determining states uniquely (using online-clustering methods) and recognizing the state consistently (expressed by a probability distribution), initially unexpected dangerous situations can be detected.
- It reminds me of some ideas about risk assessment discussed during IV19: the discrepancy between expected outcome and observed outcome is used to quantify risk (i.e. the surprise or misinterpretation of the current situation).
- I understand it as trying to learn the transition model (experience is generated using
- Some concerns:
- "The dimension of transition matrices should be expanded to represent state-transitions between all existing states"
- What when the scenario gets more complex than the presented "simple car-following" and that the state space (treated as discrete) becomes huge?
- In addition, "the total number of transition matrices is identical to the total number of actions".
- Alone for the simple example, the acceleration command was sampled into
17
bins. Continuous action spaces are not an option.
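A toy illustration (mine, not the authors' `e`-`FSM` code) of the action-conditioned transition matrices and of the "surprise" signal that flags unexpected transitions:

```python
# Toy online estimation of P(s' | s, a): one transition matrix per discretized action.
import numpy as np

n_states, n_actions = 6, 17                          # e.g. 17 discretized acceleration bins
counts = np.ones((n_actions, n_states, n_states))    # Laplace prior to avoid zero rows

def update(s, a, s_next):
    counts[a, s, s_next] += 1

def transition_matrix(a):
    c = counts[a]
    return c / c.sum(axis=1, keepdims=True)           # row-normalized P(s' | s, a)

def surprise(s, a, s_next):
    # Low predicted probability of the observed outcome = "unexpected" (potentially risky) situation.
    return 1.0 - transition_matrix(a)[s, s_next]

update(s=0, a=3, s_next=1)
update(s=0, a=3, s_next=1)
print(round(surprise(0, 3, 1), 2), round(surprise(0, 3, 5), 2))   # 0.62 0.88
```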
- "The dimension of transition matrices should be expanded to represent state-transitions between all existing states"
"A Driving Intention Prediction Method Based on Hidden Markov Model for Autonomous Driving"
-
[
2019
] [π] [ πIEEE
] -
[
HMM
,Baum-Welch algorithm
,forward algorithm
]
Click to expand
One figure:
Source. |
Authors: Liu, S., Zheng, K., Member, S., Zhao, L., & Fan, P.
- One term: "mobility feature matrix"
- The recorded data (e.g. absolute positions, timestamps ...) are processed to form the mobility feature matrix (e.g. speed, relative position, lateral gap in lane ...).
- Its size is
T × L × N
:T
time steps,L
vehicles,N
types of mobility features. - In the discrete characterization, this matrix is then turned into a set of observations using K-means clustering.
- In the continuous case, mobility features are modelled as Gaussian mixture models (GMMs).
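A small sketch (my own toy example) of the "discrete characterization" step: clustering the mobility features with K-means to obtain the discrete observation symbols fed to the HMM:

```python
# Toy discretization of the mobility feature matrix into HMM observation symbols.
import numpy as np
from sklearn.cluster import KMeans

T, L, N = 100, 5, 3                                    # time steps, vehicles, feature types
mobility = np.random.rand(T, L, N)                     # e.g. speed, relative position, lateral gap

features = mobility.reshape(-1, N)                     # (T*L, N): one row per vehicle and time step
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(features)
observations = kmeans.labels_.reshape(T, L)            # discrete symbol in {0..7} per (t, vehicle)
print(observations.shape, observations[:3, 0])         # (100, 5) and the first symbols of vehicle 0
```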
- This work implements HMM concepts presented in my project Educational application of Hidden Markov Model to Autonomous Driving.
"Online Risk-Bounded Motion Planning for Autonomous Vehicles in Dynamic Environments"
-
[
2019
] [π] [ πMIT
] [ πToyota
] -
[
intention-aware planning
,manoeuvre-based motion prediction
,POMDP
,probabilistic risk assessment
,CARLA
]
Click to expand
One figure:
Source. |
Authors: Huang, X., Hong, S., Hofmann, A., & Williams, B.
- One term: "Probabilistic Flow Tubes" (
PFT
)- A motion representation used in the "Motion Model Generator".
- Instead of using hand-crafted rules for the transition model, the idea is to learn human behaviours from demonstration.
- The inferred models are encoded with PFTs and are used to generate probabilistic predictions for both manoeuvre (long-term reasoning) and motion of the other vehicles.
- The advantage of belief-based probabilistic planning is that it can avoid over-conservative behaviours while offering probabilistic safety guarantees.
- Another term: "Risk-bounded POMDP Planner"
- The uncertainty in the intention estimation is then propagated to the decision module.
- Some notion of risk, defined as the probability of collision, is evaluated and considered when taking actions, leading to the introduction of a "chance-constrained POMDP" (
CC-POMDP
). - The online solver uses a heuristic-search algorithm, Risk-Bounded AO* (
RAO*
), which takes advantage of the risk estimation to prune the over-risky branches that violate the risk constraints and eventually outputs a plan with a guarantee over the probability of success.
- One quote (this could apply to many other works):
"One possible future work is to test our work in real systems".
"Towards Human-Like Prediction and Decision-Making for Automated Vehicles in Highway Scenarios"
-
[
planning-based motion prediction
,manoeuvre-based motion prediction
]
Click to expand
Author: Sierra Gonzalez, D.
-
Prediction techniques are often classified into three types:
physics-based
manoeuvre-based
(andgoal-based
).interaction-aware
-
As I understood, the main idea here is to combine prediction techniques (and their advantages).
- The driver-models (i.e. the reward functions previously learnt with IRL) can be used to identify the most likely, risk-aversive, anticipatory manoeuvres. This is called the
model-based
prediction by the author since it relies on one model.- But relying only on driver models to predict the behaviour of surrounding traffic might fail to predict dangerous manoeuvres.
- As stated, "the model-based method is not a reliable alternative for the short-term estimation of behaviour, since it cannot predict dangerous actions that deviate from what is encoded in the model".
- One solution is to add a term that represents how the observed movement of the target matches a given maneuver.
- In other words, to consider the noisy observation of the dynamics of the targets and include these so-called
dynamic evidence
into the prediction.
- The driver-models (i.e. the reward functions previously learnt with IRL) can be used to identify the most likely, risk-aversive, anticipatory manoeuvres. This is called the
-
Usage:
- The resulting approach is used in the probabilistic filtering framework to update the belief in the POMDP and in its rollout (to bias the construction of the history tree towards likely situations given the state and intention estimations of the surrounding vehicles).
- It improves the inference of manoeuvres, reducing rate of false positives in the detection of
lane change
manoeuvres and enables the exploration of situations in which the surrounding vehicles behave dangerously (not possible if relying on safe generative models such asIDM
).
-
One quote about this combination:
"This model mimics the reasoning process of human drivers: they can guess what a given vehicle is likely to do given the situation (the model-based prediction), but they closely monitor its dynamics to detect deviations from the expected behaviour".
-
One idea: use this combination for risk assessment.
- As stated, "if the intended and expected maneuver of a vehicle do not match, the situation is classified as dangerous and an alert is triggered".
- This is an important concept of risk assessment I could identify at IV19: a situation is dangerous if there is a discrepancy between what is expected (given the context) and what is observed.
-
One term: "Interacting Multiple Model" (
IMM
), used as baseline in the comparison.- The idea is to consider a group of motion models (e.g.
lane keeping with CV
,lane change with CV
) and continuously estimate which of them captures more accurately the dynamics exhibited by the target. - The final predictions are produced as a weighted combination of the individual predictions of each filter.
IMM
belongs to the physics-based predictions approaches and could be extended formanoeuvre inference
(called dynamics matching). It is often used to maintain the beliefs and guide the observation sampling in POMDP.- But the issue is that IMM completely disregards the interactions between vehicles.
- The idea is to consider a group of motion models (e.g.
"Decision making in dynamic and interactive environments based on cognitive hierarchy theory: Formulation, solution, and application to autonomous driving"
-
[
2019
] [π] [ πUniversity of Michigan
] -
[
level-k game theory
,cognitive hierarchy theory
,interaction modelling
,interaction-aware decision making
]
Click to expand
Authors: Li, S., Li, N., Girard, A., & Kolmanovsky, I.
-
One concept:
cognitive hierarchy
.- Other drivers are assumed to follow some "cognitive behavioural models", parametrized with a so called "cognitive level"
Ο
. - The goal is to obtain and maintain belief about
Ο
based on observation in order to optimally respond (using anMPC
). - Three levels are considered:
- level-
0
: driver that treats other vehicles on road as stationary obstacles. - level-
1
: cautious/conservative driver. - level-
2
: aggressive driver.
- level-
- Other drivers are assumed to follow some "cognitive behavioural models", parametrized with a so called "cognitive level"
-
One quote about the "cognitive level" of human drivers:
"Humans are most commonly level-1 and level-2 reasoners".
Related works:
-
Li, S., Li, N., Girard, A. & Kolmanovsky, I. [2019]. "Decision making in dynamic and interactive environments based on cognitive hierarchy theory, Bayesian inference, and predictive control" [pdf]
-
Li, N., Oyler, D., Zhang, M., Yildiz, Y., Kolmanovsky, I., & Girard, A. [2016]. "Game-theoretic modeling of driver and vehicle interactions for verification and validation of autonomous vehicle control systems" [pdf]
-
"If a driver assumes that the other drivers are level-
1
and takes an action accordingly, this driver is a level-2
driver". - Use RL with hierarchical assignment to learn the policy:
- First, the
Ο-0
(for level-0
) is learnt for the ego-agent. - Then
Ο-1
with all the other participants followingΟ-0
. - Then
Ο-2
...
- First, the
- Action masking: "If a car in the left lane is in a parallel position, the controlled car cannot change lane to the left".
- "The use of these hard constrains eliminates the clearly undesirable behaviours better than through penalizing them in the reward function, and also increases the learning speed during training"
-
-
Ren, Y., Elliott, S., Wang, Y., Yang, Y., & Zhang, W. [2019]. "How Shall I Driveβ―? Interaction Modeling and Motion Planning towards Empathetic and Socially-Graceful Driving" [pdf] [code]
Source. |
Source. |
"Trajectory Optimization and Situational Analysis Framework for Autonomous Overtaking with Visibility Maximization"
- [
2019
] [π] [ποΈ] [ πNational University of Singapore, Delft University, MIT
] - [
FSM
,occlusion
,partial observability
]
Click to expand
Left: previous work Source. Right: The BP FSM consists in 5 states and 11 transitions. Each transition from one state to the other is triggered by specific alphabet unique to the state. For instance, 1 is Obstacle to be overtaken in ego lane detected . Together with the MPC set of parameters, a guidance path is passed to the trajectory optimizer. Source. |
Authors: Andersen, H., Alonso-mora, J., Eng, Y. H., Rus, D., & Ang Jr, M. H.
- Main motivation:
- Deal with occlusions, i.e. partial observability.
- Use case: a car is illegally parked on the vehicle's ego lane. It may fully occlude the visibility. But it has to be overtaken.
- One related works:
- "Trajectory Optimization for Autonomous Overtaking with Visibility Maximization" - (Andersen et al., 2017)
- [ποΈ].
- [ποΈ].
- [ποΈ].
- About the hierarchical structure.
1-
A high-level behaviour planner (BP
).- It is structured as a deterministic finite state machine (
FSM
). - States include:
Follow ego-lane
Visibility Maximization
Overtake
Merge back
Wait
- Transition are based on some deterministic
risk assessment
.- The authors argue that the deterministic methods (e.g. formal verification of trajectory using
reachability analysis
) are simpler and computationally more efficient than probabilistic versions, while being very adapted for this information maximization: -
This is due to the fact that the designed behaviour planner explicitly breaks the traffic rule in order to progress along the vehicle's course.
- The authors argue that the deterministic methods (e.g. formal verification of trajectory using
- It is structured as a deterministic finite state machine (
- Interface
1-
>2-
- Each state corresponds to a specific set of parameters that is used in the trajectory optimizer.
-
"In case of
Overtake
, a suggested guidance path is given to both theMPC
and backup trajectory generator".
2-
A trajectory optimizer.- The problem is formulated as receding horizon planner and the task is to solve, in real-time, the non-linear constrained optimization.
- Cost include
guidance path deviation
,progress
,speed deviation
,size of blind spot
(visible area) andcontrol inputs
. - Constraints include, among other,
obstacle avoidance
. - The prediction horizon of this
MPC
is5s
.
- Cost include
- Again (I really like this idea),
MPC
parameters are set by theBP
.- For instance, the cost for
path deviation
is high forFollow ego-lane
, while it can be reduced forVisibility Maximization
. -
"Increasing the visibility maximization cost resulted in the vehicle deviating from the path earlier and more abrupt, leading to frequent wait or merge back cases when an oncoming car comes into the vehicleβs sensor range. Reducing visibility maximization resulted in later and less abrupt deviation, leading to overtaking trajectories that are too late to be aborted. We tune the costs for a good trade-off in performance."
- Hence, depending on the state, the task might be to maximize the amount of information that the autonomous vehicle gains along its trajectory.
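An illustrative sketch (hypothetical weights) of how the `BP` state could select the `MPC` cost parameters:

```python
# Toy mapping from behaviour-planner state to MPC cost weights.
MPC_WEIGHTS = {
    "Follow ego-lane":         {"path_deviation": 5.0, "blind_spot": 0.0, "progress": 1.0},
    "Visibility Maximization": {"path_deviation": 1.0, "blind_spot": 4.0, "progress": 0.5},
    "Overtake":                {"path_deviation": 2.0, "blind_spot": 1.0, "progress": 2.0},
    "Merge back":              {"path_deviation": 4.0, "blind_spot": 0.0, "progress": 1.0},
    "Wait":                    {"path_deviation": 5.0, "blind_spot": 0.0, "progress": 0.0},
}

def configure_mpc(bp_state):
    # The trajectory optimizer would receive these weights together with a guidance path.
    return MPC_WEIGHTS[bp_state]

print(configure_mpc("Visibility Maximization"))
```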
-
"Our method considers visibility as a part of both
decision-making
andtrajectory generation
".
"Jointly Learnable Behavior and Trajectory Planning for Self-Driving Vehicles"
- [
2019
] [π] [ πUber
] - [
max-margin
]
Click to expand
Source. |
Authors: Sadat, A., Ren, M., Pokrovsky, A., Lin, Y., Yumer, E., & Urtasun, R.
- Main motivation:
- Design a decision module where both the behavioural planner and the trajectory optimizer share the same objective (i.e. cost function).
- Therefore "joint".
-
"[In approaches not-joint approaches] the final trajectory outputted by the trajectory planner might differ significantly from the one generated by the behavior planner, as they do not share the same objective".
- Requirements:
1-
Avoid time-consuming, error-prone, and iterative hand-tuning of cost parameters.- E.g. Learning-based approaches (
BC
).
- E.g. Learning-based approaches (
2-
Offer interpretability about the costs jointly imposed on these modules.- E.g. Traditional modular
2
-stage approaches.
- E.g. Traditional modular
- About the structure:
- The driving scene is described in
W
(desired route
,ego-state
,map
, anddetected objects
). ProbablyW
for "World"? - The behavioural planner (
BP
) decides two things based onW
:1-
An high-level behaviourb
.- The path to converge to based on one chosen manoeuvre:
keep-lane
,left-lane-change
, orright-lane-change
. - The
left
andright
lane boundaries. - The obstacle
side assignment
: whether an obstacle should stay in thefront
,back
,left
, orright
to the ego-car.
- The path to converge to based on one chosen manoeuvre:
2-
A coarse-level trajectory τ
.- The loss also has a regularization term.
- This decision is "simply" the
argmin
of the shared cost-function, obtained by sampling+selecting the best.
- The "trajectory optimizer" refines
τ
based on the constraints imposed byb
.- For instance an overlap cost will be incurred if the
side assignment
ofb
is violated.
- A cost function parametrized by
w
assesses the quality of the selected <b
,τ
> pair:cost
=w^T
.sub-costs-vec
(τ
,b
,W
).- Sub-costs relate to safety, comfort, feasibility, mission completion, and traffic rules.
- Why "learnable"?
- Because the weight vector
w
that captures the importance of each sub-cost is learnt based on human demonstrations.-
"Our planner can be trained jointly end-to-end without requiring manual tuning of the costs functions".
-
- They are two losses for that objective:
1-
Imitation loss (withMSE
).- It applies on the <
b
,τ
> produced by theBP
.
- It applies on the <
2-
Max-margin loss to penalize trajectories that have small cost and are different from the human driving trajectory.- It applies on the <
τ
> produced by the trajectory optimizer. -
"This encourages the human driving trajectory to have smaller cost than other trajectories".
- It reminds me of the
max-margin
method inIRL
where the weights of the reward function should make the expert demonstration better than any other policy candidate.
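A minimal sketch (my simplification, with hypothetical shapes) of such a max-margin term on the linear cost `w^T`.`c`:

```python
# Toy max-margin loss: the human trajectory should have a lower cost than any sampled candidate.
import torch

def max_margin_loss(w, c_human, c_candidates, margin=1.0):
    """w: (D,) learnable cost weights. c_human: (D,) sub-cost vector of the human trajectory.
    c_candidates: (N, D) sub-cost vectors of sampled candidate trajectories."""
    cost_human = c_human @ w                        # scalar
    cost_cand = c_candidates @ w                    # (N,)
    # Hinge: positive whenever a candidate is not at least `margin` more costly than the human one.
    return torch.clamp(cost_human - cost_cand + margin, min=0).mean()

w = torch.randn(8, requires_grad=True)
loss = max_margin_loss(w, torch.rand(8), torch.rand(32, 8))
loss.backward()                                     # gradients push human-like trajectories to lower cost
print(float(loss))
```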
"Liability, Ethics, and Culture-Aware Behavior Specification using Rulebooks"
-
[
2019
] [π] [] [ποΈ] [ποΈ] [ π
ETH Zurich
] [ πnuTonomy
,Aptiv
] -
[
sampling-based planning
,safety validation
,reward function
,RSS
]
Click to expand
Some figures:
Defining the rulebook . Source. |
The rulebook is associated to an operator =< to prioritize between rules. Source. |
The rulebook serves for deciding which trajectory to take and can be adapted using a series of operations. Source. |
Authors: Censi, A., Slutsky, K., Wongpiromsarn, T., Yershov, D., Pendleton, S., Fu, J., & Frazzoli, E.
-
Allegedly how nuTonomy (an Aptiv company) cars work.
-
One main concept: "rulebook".
- It contains multiple
rules
, that specify the desired behaviour of the self-driving cars. - A rule is simply a scoring function, or βviolation metricβ, on the realizations (= trajectories).
- The degree of violation acts like some penalty term: here some examples of evaluation of a realization
x
evaluated by a ruler
:- For speed limit:
r
(x
) = interval for which the car was above45 km/h
. - For minimizing harm:
r
(x
) = kinetic energy transferred to human bodies.
- For speed limit:
- Based on these violation metrics, the rulebook is used as a comparison operator to rank candidate trajectories.
- It contains multiple
-
One idea: Hierarchy of rules.
- With many rules being defined, it may be impossible to find a realization (e.g. trajectory) that satisfies all.
- But even in a critical situation, the algorithm must make a choice - the least catastrophic option (hence no concept of infeasibility).
- To deal with this concept of "Unfeasibility", priorities between conflicting rules which are therefore hierarchically ordered.
- Hence a rulebook
R
comes with some operator<
: <R
,<
>. - This leads to some concepts:
- Safety vs. infractions.
- Ex.: a rule "not to collide with other objects" will have a higher priority than the rule "not crossing the double line".
- Liability-aware specification.
- Ex.: (edge-case): Instruct the agent to collide with the object on its lane, rather than collide with the object on the opposite lane, since changing lane will provoke an accident for which it would be at fault.
- This is close to the RSS ("responsibility-sensitive safety" model) of Mobileye.
- Hierarchy between rules:
- Top: Guarantee safety of humans.
- This is written analytically (e.g. a precise expression for the kinetic energy to minimize harm to people).
- Bottom: Comfort constraints and progress goals.
- Can be learnt based on observed behaviour (and also tend to be platform- and implementation- specific).
- Middle: All the other priorities among rule groups
- These are somewhat open for discussion.
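A toy sketch (invented rules and violation values, simplified to a lexicographic order rather than the paper's pre-order) of using a prioritized rulebook to rank candidate trajectories:

```python
# Toy rulebook ranking: higher-priority violation metrics dominate lower-priority ones.
RULES = ["collision_energy", "double_line_crossing", "jerk"]   # ordered by decreasing priority

def violations(trajectory):
    # Each rule is a scoring function of the realization; here the scores are just stored values.
    return tuple(trajectory[rule] for rule in RULES)

candidates = [
    {"name": "brake_in_lane", "collision_energy": 0.0, "double_line_crossing": 0.0, "jerk": 8.0},
    {"name": "cross_double_line", "collision_energy": 0.0, "double_line_crossing": 1.0, "jerk": 2.0},
    {"name": "keep_speed", "collision_energy": 3.5, "double_line_crossing": 0.0, "jerk": 0.5},
]

best = min(candidates, key=violations)   # lexicographic comparison of the violation tuples
print(best["name"])                      # 'brake_in_lane': no collision, no infraction, even if less comfortable
```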
-
How to build a rulebook:
- Rules can be defined analytically (e.g.
LTL
formalism) or learnt from data (for non-safety-critical rules). - Violation functions can be learned from data (e.g.
IRL
). - Priorities between rules can also be learnt.
- Rules can be defined analytically (e.g.
-
One idea: manipulation of rulebooks.
- Regulations and cultures differ depending on the country and the state.
- A rulebook <
R
,<
> can easily be adapted using three operations (priority refinement
,rule augmentation
,rule aggregation
).
-
Related work: Several topics raised in this paper remind me of subjects addressed in Emilio Frazzoli, CTO, nuTonomy - 09.03.2018
- 1- Decision making with FSM:
- Too complex to code. Easy to make mistake. Difficult to adjust. Impossible to debug (:cry:).
- 2- Decision making with E2E learning:
- Appealing since there are too many possible scenarios.
- But how to prove that and justify it to the authorities?
- One solution is to revert such imitation strategy: start by defining the rules.
- 3- Decision making "cost-function-based" methods
- 3-1-
RL
/MCTS
: not addressed here. - 3-2- Rule-based (not the
if
-else
-then
logic but rather traffic/behaviour rules).
- 3-1-
- First note:
- Number of rules: small (
15
are enough for level-4
). - Number of possible scenarios: huge (combinational).
- Number of rules: small (
- Second note:
- Driving behaviours are hard to code.
- Driving behaviours are hard to learn.
- But driving behaviours are easy to assess.
- Strategy:
- 1- Generate candidate trajectories
- Not only in time and space.
- Also in term of semantic (logical trajectories in Kripke structure).
- 2- Check if they satisfy the constraints and pick the best.
- This involves linear operations.
- 1- Generate candidate trajectories
- Conclusion:
-
"Rules and rules priorities, especially those that concern safety and liability, must be part of nation-wide regulations to be developed after an informed public discourse; it should not be up to engineers to choose these important aspects."
- This reminds me of the discussion about social acceptance I had at IV19.
- As E. Frazzoli concluded during his talk, the remaining question is:
- "We do not know how we want human-driven vehicle to behave?"
- Once we have the answer, that is easy.
-
Some figures from this related presentation:
Candidate trajectories are not just spatio-temporal but also semantic. Source. |
Define priorities between rules, as Asimov did for his laws. Source. |
As raised here by the main author of the paper, I am still wondering how the presented framework deals with the different sources of uncertainties. Source. |
"Provably Safe and Smooth Lane Changes in Mixed Traffic"
Click to expand
Some figures:
The first safe? check might lead to conservative behaviours (huge gaps would be needed for safe lane changes). Hence it is relaxed with some Probably Safe? condition. Source. |
Source. |
Formulation by Pek, Zahn, & Althoff, 2017. Source. |
Authors: Naumann, M., KΓΆnigshof, H., & Stiller, C.
-
Main ideas:
- The notion of safety is based on the responsibility sensitive safety (
RSS
) definition.- As stated by the authors, "A
safe
lane change is guaranteed not tocause
a collision according to the previously defined rules, while a single vehicle cannot ensure that it will never be involved in a collision."
- As stated by the authors, "A
- Use set-based reachability analysis to prove the "RSS-safety" of lane change manoeuvre based on gap evaluation.
- In other words, it is the responsibility of the ego vehicle to maintain safe distances during the lane change manoeuvre.
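For reference, a hedged sketch of the standard `RSS` longitudinal safe distance (the general formula, not necessarily the exact gap condition evaluated in this paper):

```python
# RSS-style longitudinal safe distance between a rear vehicle (v_r) and a front vehicle (v_f).
def rss_safe_distance(v_r, v_f, rho=0.5, a_accel=3.0, b_min=4.0, b_max=8.0):
    """rho: response time [s]; a_accel: max accel of the rear car during rho;
    b_min: its guaranteed braking; b_max: max braking of the front car."""
    d = (v_r * rho
         + 0.5 * a_accel * rho ** 2
         + (v_r + rho * a_accel) ** 2 / (2 * b_min)
         - v_f ** 2 / (2 * b_max))
    return max(d, 0.0)

# Gap needed behind a slower leader vs. behind one driving at the same speed.
print(round(rss_safe_distance(v_r=20.0, v_f=15.0), 1))   # larger gap required
print(round(rss_safe_distance(v_r=20.0, v_f=20.0), 1))
```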
-
Related works: A couple of safe distances are defined, building on
RSS
principles (after IV19, I tried to summarize some of the RSS concepts here).- "Verifying the Safety of Lane Change Maneuvers of Self-driving Vehicles Based on Formalized Traffic Rules", (Pek, Zahn, & Althoff, 2017)
"Decision-Making Framework for Autonomous Driving at Road Intersections: Safeguarding Against Collision, Overly Conservative Behavior, and Violation Vehicles"
-
[
2018
] [π] [ποΈ] [ πDaejeon Research Institute, South Korea
] -
[
probabilistic risk assessment
,rule-based probabilistic decision making
]
Click to expand
One figure:
Source. |
Author: Noh, S.
- Many ML-based works criticize rule-based approaches (over-conservative, no generalization capability and painful parameter tuning).
- True, the presented framework contains many parameters whose tuning may be tedious.
- But this approach just works! At least they go out of the simulator and show some experiments on a real car.
- I really like their video, especially the multiple camera views together with the
RViz
representation. - It can be seen that probabilistic reasoning and uncertainty-aware decision making are essential for robustness.
- One term: "Time-to-Enter" (tte).
- It represents the time it takes a relevant vehicle to reach the potential collision area (CA), from its current position at its current speed.
- To deal with uncertainty in the measurements, a variant of this heuristic is coupled with a Bayesian network for probabilistic threat-assessment.
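A toy sketch (invented thresholds) of the "Time-to-Enter" heuristic and of the derived threat level:

```python
# Toy time-to-enter heuristic and coarse threat classification.
def time_to_enter(distance_to_ca, speed):
    return float("inf") if speed <= 0.0 else distance_to_ca / speed

def threat_level(tte_other, tte_ego, margin=2.0):
    gap = abs(tte_other - tte_ego)
    if gap < margin:
        return "Dangerous"    # both vehicles reach the collision area at about the same time
    if gap < 2 * margin:
        return "Attentive"
    return "Safe"

tte_ego = time_to_enter(distance_to_ca=30.0, speed=10.0)    # 3.0 s
tte_other = time_to_enter(distance_to_ca=28.0, speed=8.0)   # 3.5 s
print(threat_level(tte_other, tte_ego))                     # Dangerous
```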
- One Q&A: What is the difference between situation awareness and situation assessment?
- In situation awareness, all possible routes are considered for the detected vehicles using a map. The vehicles whose potential route intersect with the ego-path are classified as relevant vehicles.
- In situation assessment, a threat level in {
Dangerous
,Attentive
,Safe
} is inferred for each relevant vehicle.
- One quote:
"The existing literature mostly focuses on motion prediction, threat assessment, or decision-making problems, but not all three in combination."
"Review, Analyse, and Design a Comprehensive Deep Reinforcement Learning Framework"
-
[
2020
] [π] [ πDeakin University
,Northeastern University
] []
-
[
RL software framework
,FruitAPI
]
Click to expand
The API is based on three core concepts: policy network , network configuration , and learner . Besides, the monitor is used to manage multiple learners (if multi-threading is used) and collect any data from the learners during training while factory components can offer higher abstraction. No need to reinvent the wheel: Plugins are gateways that extract learners from other libraries such as PyTorch , Tensorflow , Keras and plug them into our proposed framework. Source. |
Source. |
Source. |
Authors: Nguyen, N. D., Nguyen, T. T., Nguyen, H., & Nahavandi, S.
FruitAPI
: Yet anotherRL
framework?- Whatever! Some useful reminders about
RL
concepts and good insights about how software can be structured forRL
applications.
- Whatever! Some useful reminders about
- Motivations:
-
"Our ultimate goal is to build an educational software platform for deep
RL
."- For instance, they plan to develop a GUI application that can configure the neural network, modify the learner, and visualize the
RL
workflow.
- For instance, they plan to develop a GUI application that can configure the neural network, modify the learner, and visualize the
1-
The API should work with any deep learning libraries such asPyTorch
,Tensorflow
,Keras
, etc.-
"Instead of implementing a lot of deep
RL
algorithms, we provide a flexible way to integrate existing deep RL libraries by introducingplugins
. Plugins extract learners from other deep RL libraries and plug intoFruitAPI
."
-
2-
Support different disciplines inRL
such as multiple objectives, multiple agents, and human-agent interaction.
-
- Mentioned frameworks:
TensorForce
.-
"It is an ambitious project that targets both industrial applications and academic research. The library has the best modular architecture we have reviewed so far. However, the framework has a deep software stack (βpyramidβ model) that includes many abstraction layers. This hinders novice readers from prototyping a new deep RL method."
-
OpenAI Baselines
.RLLib
. For scalableRL
. Built on top ofray
.RLLab
. Nowgarage
. A toolkit for reproducible reinforcement learning research.Keras-RL
.Chainer
.
"Learning hierarchical behavior and motion planning for autonomous driving"
-
[
2020
] [π] [ πZhejiang University
,Arizona State University
]
Click to expand
To achieve transfer, the authors develop a sharable representation for input sensory data across simulation platforms and real-world environment. Left: static. Right: dynamic information. Source. |
Left - The behaviour layer makes a decision based on the current observation, e.g. lane change , while the motion planning layer yields the trajectory to complete this decision. More precisely, the output of the PPO actor-network is the distribution over the behaviour s. The motion planner is conditioned over this high-level decision. If you are also wondering: traffic light state and speed limit are represented probably since they are part of the road profile (static information) - surprising to limit a 3 -lanes highway at 30mps ? Source. |
Authors: Wang, J., Wang, Y., Zhang, D., Yang, Y., & Xiong, R.
-
Motivations:
1-
Jointly optimize both thebehaviour
andmotion
planning modules. I.e. do not treat them independently.- Methods that use
RL
to decide low-level commands such asthrottle
andsteering
often lead to weak tactical decision-making. -
"To demonstrate a similar level of driving as the conventional modular pipeline, a SINGLE
RL
network has to model theperception
,behavior
andmotion planner
altogether. Given the low dimensional control commands as supervision inIL
or handcrafted rewards inRL
, it is likely that the network only learns the route tracking with limited tactical decision-making ability."
- Methods that use
2-
Apply learning-based methods (model-freeRL
andIL
) to derive the decision policies.3-
Address the issues of sampling complexity and sparse reward in model freeRL
.-
"Reward engineering for
RL
is challenging, especially for long-horizon tasks."
-
4-
Do not use any "explicit handcrafted representation" for traffic situation.-
"[not clear to me] We follow the idea of
HBMP
in order to learn a representation of traffic situation implicitly and improve the tactical driving in learning-based solution." -
"[is it what they mean by 'not explicit handcrafted representation'?] Note that no cyclist recognition is utilized in the whole pipeline, but only occupied map directly extracted from lidar data."
-
-
One concept: hierarchical behaviour and motion planning (
HBMP
).- The
behaviour
layer makes a decision based on the current observation, e.g.lane change
, while the **motion planning
layer` yields the trajectory to complete this decision. -
"our idea is to introduce the hierarchical modeling of
behavior
andmotion
in the conventional modular pipeline to the learning-based solution."
- The
-
About the size of search spaces in
hierarchical RL
:- When coupling the two
action
spaces (behaviour
andmotion
), the search complexity becomes a challenge, especially in the long-horizon driving task. TheHRL
is therefore re-formulated:-
"[solution:]
RL
is applied to solve an equivalentbehavior
learning problem whosereward
s are assigned by thecost
from themotion planner
." -
"The key insight is assigning the optimal costs from low-level motion planner, i.e. a classical sampling-based planner, as
reward
s for learning behavior." - The ablation study shows the importance of
hierarchical
modelling to reduce theaction
space.
-
- When coupling the two
-
Improving the sampling complexity and sparse reward issues:
1-
The above-mentioned Hierarchy: Implement a temporal abstraction foraction
s.-
"To reduce the complexity, behaviour planning is introduced to restrict the search space."
-
"As the high-level
behaviour
bk
is a tactical decision for a longer horizon thancontrol
command, eachbk
is fixed for a time interval". - The low-level policy is conditioned on
bk
:ut
=πbu
(xt
,bk
). - The behaviour planner chooses a
behaviour
abk
in {speed up
,speed down
,change left
,change right
,keep lane
}. - A Behaviour-Conditioned motion planner produces a trajectory for this behaviour, implemented by a subsequent
PID
controller.- Here it is sampling-based, i.e. optimal without training.
- The
cost
includes terms aboutspeed deviation
,lateral deviation
to some reference path as well asdistances to obstacles
.
-
2-
Good policy initialization.-
"We initialize the
RL
policy network trained onCARLA
usingIL
-based policy (DAgger
) trained onSUMO
by presenting a sharable sensory data representation, further acceleratingRL
training". - The key ingredient for the transfer between the two simulators is the
state
representation, among other the deformed grid mapMt
.
-
-
How to deal with sparse rewards?
-
"Obviously, such
reward
design [sparse rewards] gives extremely sparse guidance only in the final stage, thus lots of trials may have almost the same rewards, causing the inefficiency ofRL
solver." - The above-mentioned Hierarchy: add
dense reward
s. - The low-level motion planner offers denser rewards than relying on the far-away
reward
oftermination state
s.
-
-
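As an illustration of the hierarchy just described, here is a minimal sketch of my own (helpers such as `sample_trajectory`, `trajectory_cost` and `track_with_pid` are hypothetical): the `RL` agent only picks the high-level `behaviour`, which stays fixed for a time interval, and the dense `reward` assigned to that behaviour is the negative optimal cost returned by the sampling-based motion planner.

```python
import numpy as np

BEHAVIOURS = ["speed up", "speed down", "change left", "change right", "keep lane"]

def plan_motion(state, behaviour, n_samples=50):
    """Behaviour-conditioned, sampling-based planner: best trajectory and its optimal cost."""
    candidates = [sample_trajectory(state, behaviour) for _ in range(n_samples)]  # hypothetical helper
    costs = [trajectory_cost(t) for t in candidates]  # speed/lateral deviation + distance to obstacles
    best = int(np.argmin(costs))
    return candidates[best], costs[best]

def collect_behaviour_transitions(env, behaviour_policy, n_intervals=20, steps_per_interval=10):
    """One episode at the behaviour level: each b_k is kept fixed for a time interval,
    and the reward for b_k is the (negative) cost returned by the motion planner."""
    state, transitions = env.reset(), []
    for _ in range(n_intervals):
        b_k = behaviour_policy(state)                      # one of BEHAVIOURS
        trajectory, cost = plan_motion(state, b_k)
        for _ in range(steps_per_interval):
            state = env.step(track_with_pid(state, trajectory))  # low-level PID tracking (hypothetical)
        transitions.append((state, b_k, -cost))            # dense reward, not only at termination
    return transitions
```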
Generalization to real-world.
- Again, the sharable representation for input sensory data across simulation platforms and real-world environment is key for transfer.
- It consists of
1-
thevehicle-centric grid map
for dynamic information-
"Note that no detection and recognition is utilized to parse the sensory data but only geometric transform calculation."
-
2-
theroad profile
that contains static information:- The existence and direction of left and right lanes.
- The next traffic lights and the distance to the stop line of intersection.
- The current speed limit.
"Safe Reinforcement Learning for Autonomous Vehicles through Parallel Constrained Policy Optimization"
-
[
2020
] [π] [ πTsinghua University
,University of Michigan
] [ πToyota
] -
[
safe RL
]
Click to expand
Risk-Actor-Critic: Similar to the critic which estimates the expectation of the value function (cumulated reward s), the risk network estimates the sum of coming risk signals, i.e. signals received when moving to unsafe states. In this case PCPO considers this risk function and bounds the expected risk within predefined hard constraints. Source. |
Note that what is wanted is approximate safety during the TRAINING phase - not just the testing one. PCPO is compared to CPO and PPO . All three methods can eventually learn a safe lane-keeping policy, however, the vehicle deviates from the lane multiple times during the learning process of the PPO . Source. |
Authors: Wen, L., Duan, J., Eben Li, S., Xu, S., & Peng, H.
-
Motivations:
1-
Ensure the policy is safe in the learning process.-
"Existing RL algorithms are rarely applied to real vehicles for two predominant problems: (1) behaviors are unexplainable, (2) they cannot guarantee safety under new scenarios."
-
"Safety is the most basic requirement for autonomous vehicles, so a training process that only looks at
reward
, and not potential risk, is not acceptable."
-
2-
Improve the convergence speed.
-
About
safe RL
:-
"Observing safety constraints and getting high rewards are adversarial."
safe RL
could be divided into being strictly safe and approximately safe.- Two approaches:
1-
Modifying the optimization criterion.1-a.
maxi-min
: consider a policy to be optimal if it has the maximum worst-casereturn
.1-b.
risk-sensitive
: includes the notion of risk and return variance in the long termreward
maximization objective, e.g. probability of entering an error state.1-c.
constrained
: ensure that the expectation of return is subject to one or more constraints, e.g. the variance of thereturn
must not exceed a given threshold.
2-
Modifying the exploration process.2-a.
Incorporating external knowledge.- E.g. post-posed shielding: the shield monitors the agent's action and corrects them if the chosen action causes a violation.
2-b.
Risk directed exploration.- Encourage the agent to explore controllable regions of environment by introducing
risk
metric as an exploration bonus.
- Encourage the agent to explore controllable regions of environment by introducing
-
-
About Parallel Constrained Policy Optimization (
PCPO
). Three ingredients:1-
Constrained policy optimization (CPO
).-
"Besides the
reward
signalr
, the vehicle will also observe a scalar risk signalΛr
at each step." - It is designed by human experts and is usually assigned a large value when the vehicle moves into an unsafe state.
- Similar to the value function
Vπ
that estimates the discounted cumulated rewards, the risk function Ĵ(π) of policy π
estimates the sum of coming risks.- Similar to the
critic
net, arisk
net is introduced.
- Similar to the
- The idea of
CPO
:-
"Bound this expected risk within predefined hard constraints."
-
-
2-
Trust-region constraint.- Monotonic improvement condition can be guaranteed only when the policy changes are not very large.
- The authors go through the derivation of the surrogate objective function with importance sampling.
3-
Synchronous parallel agents learning.- Multiple agents (
4
cars in the experiment) are exploring different state spaces in parallel. - It helps to reduce the correlation and increase the coverage of all collected samples, which increases the possibility of finding feasible states - i.e. better exploration.
- Multiple agents (
-
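To make the "risk critic" idea above concrete, here is a small sketch of my own (not the authors' code): the risk return is computed exactly like a discounted reward return, but over the risk signals, and a policy step is only considered feasible when the expected risk stays below a hard threshold `d_max` (an assumed name).

```python
import numpy as np

def risk_returns(risk_signals, gamma=0.99):
    """Discounted sum of risk signals, analogous to the reward return.
    risk_signals[t] is large when the vehicle moved into an unsafe state at step t."""
    g, out = 0.0, np.zeros(len(risk_signals))
    for t in reversed(range(len(risk_signals))):
        g = risk_signals[t] + gamma * g
        out[t] = g
    return out

def is_update_feasible(expected_risk, d_max=0.1):
    """PCPO-style hard constraint: accept a policy step only if E[risk] <= d_max."""
    return expected_risk <= d_max
```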
About the
MDP
formulation for the intersection-crossing problem:S
= {l1
,v1
,l2
,v2
,l3
,v3
}, wherel
denotes the distance of the vehicle to the middle point of its track, and v ∈ [6, 14] (m/s)
denotes thevelocity
."A
= {a1
,a2
,a3
}, where a ∈ [−3, 3] (m/s²)
.-
"We adopt a stochastic policy, the output of which is the
mean
andstandard deviation
of the Gaussian distribution."
-
reward
model:- The agents receive a reward of
10
for every passing vehicle, as well as a reward of-1
for every time step and an additional reward of10
for terminal success. - The agents are given a risk of
50
when a collision occurs.
- The agents receive a reward of
- That means "collision" is not part of the
reward
. Rather in therisk
- Ok, but how can it be ensured to be safe during training if it has never encountered any collision and does not even know what it means (no "model")? This is not clear to me.
"Deep Reinforcement Learning for Autonomous Driving: A Survey"
-
[
2020
] [π] [ πENSTA ParisTech
,National University of Ireland
] [ πNavya
,Valeo
] -
[
review
]
Click to expand
Source. |
Authors: Ravi Kiran, B., Sobh, I., Talpaert, V., Mannion, P., Sallab, A. A. Al, Yogamani, S., & Pérez, P.
- Not too much to report. A rich literature overview and some useful reminders about general
RL
concepts.- Considering the support of serious industrial companies (
Navya
,Valeo
), I was surprised not to see any section about "RL
that works in reality".
- Considering the support of serious industrial companies (
- Miscellaneous notes about decision-making methods for
AD
:- One term: "holonomic" (I often met this term without any definition)
-
"A robotic agent capable of controlling
6
-degrees of freedom (DOF
) is said to be holonomic, while an agent with fewer controllableDOF
s than its totalDOF
is said to be non-holonomic." - Cars are non-holonomic (addressed by
RRT
variants but not byA*
orDijkstra
ones).
-
- About
optimal control
(I would have mentionedplanning
rather thanmodel-based RL
):-
"
Optimal control
andRL
are intimately related, whereoptimal control
can be viewed as a model-basedRL
problem where the dynamics of the vehicle/environment are modeled by well defined differential equations."
-
- About
GAIL
to generate a policy, as an alternative toapprenticeship learning
(IRL
+RL
):-
"The theory behind
GAIL
is an equation simplification: qualitatively, ifIRL
is going from demonstrations to a cost function andRL
from a cost function to a policy, then we should altogether be able to go from demonstration to policy in a single equation while avoiding the cost function estimation."
-
- Some mentioned challenges:
- Validation and safety.
- Sample efficiency and training stability in general.
MDP
definition, e.g. space discretization or not, level of abstraction.- Simulation-reality gap (domain transfer), mentioning domain adaptation solutions:
feature
-levelpixel
-levelreal-to-sim
: adapting the real camera streams to the synthetic modality, so as to map the unfamiliar or unseen features of real images back into the simulated style.
- Reward shaping. Mentioning intrinsic rewards:
-
"In the absence of an explicit reward shaping and expert demonstrations, agents can use intrinsic rewards or intrinsic motivation."
curiosity
: the error in an agentβs ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model.
-
- Reproducibility. Multiple frameworks are mentioned:
- OpenAI Baselines.
- TF-Agents.
- Coach by Intel AI Lab.
- rlpyt by bair.berkeley.
- bsuite by DeepMind.
- One term: "holonomic" (I often met this term without any definition)
"Interpretable End-to-end Urban Autonomous Driving with Latent Deep Reinforcement Learning"
-
[
2020
] [π] [ πUC Berkeley
,Tsinghua University
] -
[
probabilistic graphical models
,MaxEnt RL
,mid-to-end
,CARLA
]
Click to expand
The RL problem is formulated with a probabilistic graphical model (PGM ). zt represents for the latent state, i.e. the hidden state. xt is the observation sensor inputs. at is the action. Ot denotes the optimality variable. The authors introduce a mask variable mt and learn its probability of emission conditioned on z . Source. |
The mask is a semantic representation of the scene, helpful for interpretation of the perception part (not directly applicable to understand the decision ). The learnt transition function (from z to z' conditioned on a ) can be used for prediction. Source. |
Authors: Chen, J., Li, S. E., & Tomizuka, M.
-
Motivations:
1-
Offer more interpretability to end-to-end model-freeRL
methods.2-
Reduce the sample complexity of model-freeRL
(actually not really addressed).
-
Properties of the environment:
1-
High-dimensional observations:camera
frame andlidar
point cloud.2-
Time-sequence probabilistic dynamics.3-
Partial observability.
-
One first idea: formulate the
RL
problem as a probabilistic graphical model (PGM
):- A binary random variable (
Ot
), called optimality variable, is introduced to indicate whether the agent is acting optimally at time stept
.- It turns out that its conditional probability is exponentially proportional to the one-step reward:
p
(Ot=1
|zt
,at
) =exp
(r
(zt
,at
)) - The stochastic exploration strategy has therefore the form of a
Boltzmann
-like distribution, with theQ
-function acting as the negative energy.
- It turn out that its conditional probability is exponentially proportional to the one-step reward:
- Then, the task is to make sure that the most probable trajectory corresponds to the trajectory from the optimal policy.
-
"
MaxEnt RL
can be interpreted as learning aPGM
."MaxEnt RL
is then used to maximize the likelihood of optimality variables in thePGM
.- More precisely, the original log-likelihood is maximized by maximizing the
ELBO
(c.f.variational methods
).
- A binary random variable (
-
About
MaxEnt RL
:- The standard
RL
is modified by adding an entropy regularization termH
(Ο
) =βlog
(Ο
) to the reward. MaxEnt RL
has a stochastic policy by default, thus the policy itself includes the exploration strategy.- Intuitively, the goal is to learn a policy that acts as randomly as possible (encouraging uniform action probability) while is still aiming at succeeding at the task.
- About soft actor critic (
SAC
):- This is an example of
MaxEnt RL
algorithm: it incorporates the entropy measure of the policy into the reward to encourage exploration. - Why "soft"?
- For small values of
Q
, the approximationV
=log
(exp
(Q
)) ~=max
(Q
) is loose and the maximum is said soft, leading to an optimisticQ
-function.
- For small values of
- This is an example of
- The standard
-
One reference about the duality
RL
/PGM
andMaxEnt RL
:- "Reinforcement learning and control as probabilistic inference: Tutorial and review" by (Levine, 2018).
"In this article, we will discuss how a generalization of the
RL
or optimal control problem, which is sometimes termedMaxEnt RL
, is equivalent to exact probabilistic inference in the case of deterministic dynamics, and variational inference in the case of stochastic dynamics." -
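A small numeric illustration of my own (not from the paper) of the "soft" quantities mentioned above: the soft value V = log Σa exp(Q(a)) is a smooth maximum over the `Q`-values, and the corresponding `Boltzmann` policy puts probabilities proportional to exp(Q/T), so exploration is part of the policy itself.

```python
import numpy as np

def soft_value(q_values, temperature=1.0):
    """Soft maximum: V = T * log sum_a exp(Q(a)/T); temperature -> 0 recovers max(q_values)."""
    q = np.asarray(q_values) / temperature
    return temperature * (np.max(q) + np.log(np.sum(np.exp(q - np.max(q)))))

def boltzmann_policy(q_values, temperature=1.0):
    """Stochastic MaxEnt policy: action probabilities proportional to exp(Q / T)."""
    q = np.asarray(q_values) / temperature
    p = np.exp(q - np.max(q))
    return p / p.sum()

q = [1.0, 2.0, 0.5]
print(soft_value(q), boltzmann_policy(q))  # soft max slightly above 2.0; highest probability on action 1
```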
Another idea: add a second generated / emitted variable.
- So far, the
observation
variable is generated/emitted from thehidden state
. Here observations arecamera
andlidar
scans. - Here, a second variable is emitted ("decoded" is the term used by the authors): a semantic bird-eye mask, noted
mt
. - It contains semantic meanings of the environment in a human understandable way:
- Drivable areas and lane markings.
Waypoints
on the desired route.- Detected objects (bounding boxes) and their history.
- Ego bounding box.
- It makes me think of the image-like scene representations used in
mid-to-end
approaches. E.g.chaufferNet
. -
"At training time we need to provide the ground truth labels of the mask, but at test time, the mask can be decoded from the latent state, showing how the system is understanding the environment semantically."
- So far, the
-
One note:
- The authors call "
latent
variable" what is sometimes referred to ashidden state
inPGM
(e.g. inHMM
).
- The author call "
-
Why "joint" model learning?
- The policy is learned jointly with the other
PGM
models.1-
policy
:Ο
(at
|zt
) - I would have conditioned it on theobservation
variable since thehidden
state is by definition not observable.2-
inference
: of the current latent state:p
(zt+1
|x1:t+1
,a1:t
).3-
latent dynamics
:p
(zt+1
|zt
,at
). This is then used for prediction.4-
generative model
forobservation
:p
(xt
|zt
), i.e. the emission of probability from the latent space to theobservation
space.5-
generative model
formask
:p
(mt
|zt
), i.e. the generation of the semantic mask from thehidden state
, to provide interpretability.
- The policy is learned jointly with the other
-
Why calling it "sequential"?
-
"Historical high-dimensional raw observations [
camera
andlidar
frames] are compressed into this low-dimensional latent space with a "sequential" latent environment model." - Actually I am not sure. Because they learn the transition function and are therefore able to estimate how the hidden state evolves.
-
-
One limitation:
- This intermediate representation shows how the model understands the scene.
-
"[but] it does not provide any intuition about how it makes the decisions, because the driving policy is obtained in a
model-free
way." - In this context,
model-based
RL
is deemed as a promising direction. - It reminds me the distinction between
learn to see
(controlled by the presentedmask
) andlearn to decide
.
"End-to-end Reinforcement Learning for Autonomous Longitudinal Control Using Advantage Actor Critic with Temporal Context"
-
[
2019
] [π] [ πUniversity of Surrey
] [ πJaguar Land Rover
] -
[
sampling efficiency
,switching actions
]
Click to expand
In the reward function, the time headway term encourages the agent to maintain a headway close to 2s , while the headway-derivative term rewards the agent for taking actions which bring it closer to the ideal headway. Source. |
Using recurrent units in the actor net leads to a smoother driving style and maintains a closer headway to the 2s target. Source. |
Authors: Kuutti, S., Bowden, R., Joshi, H., Temple, R. De, & Fallah, S.
- Motivations for "headway-keeping", i.e. longitudinal control, using model-free
RL
:1-
Address inherent sampling inefficiency.2-
Address common jerky driving behaviours, i.e. aim at smoother longitudinal trajectories.-
"Without any temporal context given, the agent has to decide the current action without any consideration for the previous actions, sometimes leading to rapid switching between the
throttle
andbrake
pedals."
-
- The task: keep a
2s
time headway from the lead vehicle inIPG CarMaker
.- The
state
consists in:ego-speed
ego-acc
delta speed to leader
time headway to leader
- Personal note: Since the
longitudinal
control of the ego-car has no influence on the speed of the leading vehicle, thetransition
function of thisMDP
should be stochastic.
- The
- One idea for sampling efficiency: "Proxy Network for Vehicle Dynamics".
- For training, the simulator was constrained to run in real time (timestep =
40ms
). At the same time, model-free methods require many samples. - One idea is to learn the "ego-dynamics", i.e. one part of the
transition
function.input
= [ego-speed
,ego-acc
,previous pedal action
,current pedal action
,road coefficient of friction
]output
=ego-speed
at the next time-step.
- The authors claim that this model (derived with supervised learning), can be used to replace the simulator when training the
RL
agent:-
"The training was completed using the proxy network in under
19
hours." - As noted above, this
proxy net
does not capture the full transition. Therefore, I do not understand how it can substitute the simulator and "self-generate" samples, except if assuming constant speed of the leading vehicle - which would boil down to sometarget-speed-tracking
task instead."- In addition, one could apply
planning
techniques instead oflearning
.
- In addition, one could apply
-
- For training, the simulator was constrained to running at real-time (timestep =
- One idea against jerky action switching:
- The authors add
16
LSTM
units to the actor network. - Having a recurrent cell provides a temporal context and leads to smoother predictions.
- The authors add a
- How to deal with continuous actions?
1-
The actor network estimates the action value mean µ
and the estimated variance σ
.2-
This is transformed into a Gaussian probability distribution, from which the control action is then sampled.
- About the "Sequenced" experience replay:
- The
RL
agent is trained off-policy
, i.e. the experience tuples used for updating the policy were not collected from the currently-updated policy, but rather drawn from a replay buffer. - Because the
LSTM
has an internal state, experience tuples <s
,a
,r
,s'
> cannot be sampled individually. -
"Since the network uses
LSTMs
for temporal context, these minibatches are sampled as sequenced trajectories of experiences."
- The
- About the "Trauma memory":
- A second set of experiences is maintained and used during training.
- This "Trauma memory" stores trajectories which lead to collisions.
- The ratio of
trauma memory
samples to experiencereplay samples
is set to1/64
.
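A toy version (assumed weights and form, not the paper's exact terms) of the headway reward shown in the figure above: one term penalises the deviation from the `2s` target time headway, a second "headway-derivative" term rewards actions that reduce that deviation.

```python
def headway_reward(headway, prev_headway, target=2.0, w_h=1.0, w_dh=0.5):
    """Toy headway-keeping reward (assumed weights w_h, w_dh)."""
    error, prev_error = abs(headway - target), abs(prev_headway - target)
    r_headway = -w_h * error                      # close to 0 when the headway is near 2 s
    r_derivative = w_dh * (prev_error - error)    # positive when moving towards the target headway
    return r_headway + r_derivative

print(headway_reward(headway=1.6, prev_headway=1.4))  # improving towards 2 s -> -0.3
```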
"Multi-lane Cruising Using Hierarchical Planning and Reinforcement Learning"
Click to expand
The high-level behavioural planner (BP ) selects a lane while the underlying motion planner (MoP ) select a corridor . Source. |
The decision-making is hierarchically divided into three levels. The first two (BP and MoP ) are learning-based. The keep-lane and switch-lane tasks are achieved using a shared MoP agent. The last module that decides of low-level commands such as throttle and steering is left rule-based since it is car-specific. Source. |
Authors: Rezaee, K., Yadmellat, P., Nosrati, M. S., Abolfathi, E. A., Elmahgiubi, M., & Luo, J.
- Motivation:
- Split the decision-making process based on different levels of abstraction. I.e. hierarchical planning.
- About the hierarchical nature of "driving" and the concept of "symbolic punctuation":
-
"Different from regular robotic problems, driving is heavily symbolically punctuated by signs and rules (e.g.
lane markings
,speed limit signs
,fire truck sirens
,turning signals
,traffic lights
) on top of what is largely a continuous control task." - In other words,
higher level
decisions on discrete state transitions should be coordinated withlower level
motion planning and control in continuous state space.
-
- About the use of learning-based methods for low-level control:
- The authors mentioned, among other, the work of
Mobileye
, whereRL
is used in the planning phase to model the vehicleβsacceleration
given results from the prediction module.-
"Given well-established controllers such as
PID
andMPC
, we believe that learning-based methods are more effective in the high and mid-level decision making (e.g.BP
andMotion Planning
) rather than low-level controllers."
-
- The authors mentioned, among other, the work of
- Another concept: "
skill
-based" planning:- It means that planning submodules are specialized for a driving sub-task (e.g.
lane keeping
,lane switching
). - The authors introduce a road abstraction: a
lane
(selected by theBP
) is divided into (5
)corridors
(selected by theMoP
).- Corridor selection is equivalent to selecting a path among a set of predefined paths.
- It means that planning submodules are specialized for a driving sub-task (e.g.
- Proposed structure:
1-
The Behavioural planner (BP
) outputs a high-level decisions in {keep lane
,switch to the left lane
,switch to the right lane
- Also, some target speed? Apparently not: a separate module sets some
target speed set-point
based on theBP
desire and the physical boundaries, such as heading cars, or any interfering objects on the road - Not clear to me.
- Also, some target speed??? No, apparently a separated module sets some
2-
The Motion planners (MoP
) outputs atarget corridor
and atarget speed
.3-
A separated and non-learnt trajectory controller converts that into low-level commands (throttle
andacceleration
)
- About the high-level
Option
: How to definetermination
?-
"The typical solution is to assign a fixed expiration time to each option and penalize the agent if execution time is expired."
- According the authors,
BP
should not wait until its command gets executed (since any fixed lifetime forBP
commands is dangerous).-
"BP should be able to update its earlier decisions, at every time step, according to the new states."
-
-
- How to train the two modules?
-
"We design a coarse-grained reward function and avoid any fine-grained rules in our reward feedback."
- The reward function of the
MoP
depends on theBP
objective:- Reward
+1
is given to theMoP
agent if being in the middle corridor (ok, but is the target lane considered?? Not clear to me),AND
:EITHER
the speed of the ego vehicle is within a threshold of theBP
target speed,OR
the minimum front gap is within a threshold of the safe distanced
(computed based onTTC
).
- Reward
- The
MoP
is first trained with randomBP
commands (target lanes are sampled every20s
). Then theBP
agent is trained: its gets positive rewards for driving above a threshold while being penalized at each lane change. - Main limitation:
- Training is done separately (both
BP
andMoP
agents cannot adapt to each other, hence the final decision might be sub-optimal).
- Training is done separately (both
-
- Another advantage: alleged easy
sim-to-real
transfer.-
"In practice, low-level policies may result in oscillatory or undesirable behaviours when deployed on real-world vehicles due to imperfect sensory inputs or unmodeled kinematic and dynamic effects."
-
"Our state-action space abstraction allows transferring of the trained models from a simulated environment with virtually no dynamics to the one with significantly more realistic dynamics without a need for retraining."
-
-
"Learning to Drive using Waypoints"
Click to expand
The communication of navigation goals at intersection is not done using high-level commands (c.f. conditional RL ), but rather by giving the PPO agent a list of predefined waypoints to follow. Source. |
Authors: Agarwal, T., Arora, H., Parhar, T., Deshpande, S., & Schneider, J.
- One idea (alternative to
conditional learning
):-
"Past approaches have used a higher-level planner that directs the agent using high-level commands on turning. Instead of this, we propose to use trajectory waypoints to guide navigation, which are readily available in real world autonomous vehicles."
- Using such a predefined path probably constrains applications to single-lane scenarios, and relies on an up-to-date HD map.
-
"We acknowledge that both the baseline methods use higher level navigation features and RGB images in contrast to richer low-level waypoint features and simpler semantically segmented images used in our approach."
-
- The agent policy network (
PPO
) takes two inputs:1-
The bottleneck embedding of an auto-encoder applied on semantically segmented images.2-
The comingn
waypoints to follow. They are2m
-spaced and are extracted from an offline map.- Both are independant of the realistism of the simulator it has trained on. One could therefore expect the approach to transfer well to real-world.
"Social Attention for Autonomous Decision-Making in Dense Traffic"
Click to expand
Weights in the encoding linear are shared between all vehicles. Each encoding contains individual features and has size dx . For each head in the stack, different linear projections (Lq , Lk , Lv ) are applied on them. Results of projections are key and values (plus a query for the ego-agent). Based on the similarity between the ego-query q0 and the key s, an attention matrix is built. This matrix should select a subset of vehicles that are important, depending on the context. It is multiplied with the concatenation of the individual values features , and then passed to a decoder where results from all heads are combined. The output are the estimated q-values .. Source. |
Example with a stack of two heads. Both direct their attention to incoming vehicles that are likely to collide with the ego-vehicle. Visualization of the attention matrix : The ego-vehicle is connected to every vehicle by a line whose width is proportional to the corresponding attention weight. The green head is only watching the vehicles coming from the left, while the blue head restricts itself to vehicles in the front and right directions.. Source. |
Authors: Leurent, E., & Mercat, J.
- Question: About the
MDP
state
(or representation of driving scene): how surrounding vehicles can be represented? - Motivations / requirements:
1-
Deal with a varying number of surrounding vehicles (problematic with function approximation which often expects constant-sized inputs).2-
The driving policy should be permutation-invariant (invariant to the ordering chosen to describe them).3-
Stay accurate and compact.
- Current approaches:
- List of features representation (fails at
1
and2
).Zero-padding
can help for varying-size inputs.
- Spatial grid representation (suffers from accuracy-size trade-off).
- List of features representation (fails at
- One concept: "Multi-head social attention mechanism".
- The
state
may contain many types of information. But the agent should only pay attention to vehicles that are close or conflict with the planned route.-
"Out of a complex scene description, the model should be able to filter information and consider only what is relevant for decision."
-
- About "attention mechanism":
-
"The attention architecture was introduced to enable neural networks to discover interdependencies within a variable number of inputs".
- For each head, a stochastic matrix called the
attention matrix
is derived. - The visualisation of this attention matrix brings interpretability.
-
- The
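A rough single-head sketch (shapes and projections are illustrative, not the paper's exact architecture) of the attention mechanism described above: the ego encoding emits a query, each vehicle encoding emits a key and a value, and the resulting weights form one row of the stochastic attention matrix. Note that the aggregated output accepts a varying number of vehicles and does not depend on their ordering.

```python
import numpy as np

def social_attention(ego_enc, vehicle_encs, Lq, Lk, Lv):
    """ego_enc: (dx,), vehicle_encs: (N, dx) with N varying between scenes."""
    q = ego_enc @ Lq                        # (dk,) ego query
    K = vehicle_encs @ Lk                   # (N, dk) keys
    V = vehicle_encs @ Lv                   # (N, dk) values
    scores = K @ q / np.sqrt(q.shape[0])    # similarity between ego query and keys
    attention = np.exp(scores - scores.max())
    attention /= attention.sum()            # one row of the stochastic attention matrix
    return attention @ V, attention         # aggregated features + weights (useful for visualisation)

dx, dk = 8, 16
rng = np.random.default_rng(0)
Lq, Lk, Lv = (rng.normal(size=(dx, dk)) for _ in range(3))
features, weights = social_attention(rng.normal(size=dx), rng.normal(size=(5, dx)), Lq, Lk, Lv)
```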
"End-to-End Model-Free Reinforcement Learning for Urban Driving using Implicit Affordances"
Click to expand
An encoder is trained to predict high-level information (called affordances ). The RL agent does not use directly them as input state but rather one layer before (hence implicit affordances). This compact representation offers benefits for interpretability, and for training efficiency (lighter to save in the replay buffer). A command {follow lane , turn left /right /straight , change lane left /right } for direction at intersection and lane changes is passed to the agent via a conditional branch. Source. |
Augmentation is needed for robustness and generalization (to address the distribution mismatch - also in IL ) (left). Here, the camera is moved around the autopilot. One main finding is the benefit of using adaptive target speed in the reward function (right). Source. |
Authors: Toromanoff, M., Wirbel, E., & Moutarde, F.
-
One quote:
-
"A promising way to solve both the data efficiency (particularly for DRL) and the black box problem is to use privileged information as auxiliary losses, also coined "affordances" in some recent papers."
-
-
One idea: The
state
of theRL
agent are affordance predictions produced by a separated network.1-
An encoder is trained in a supervised way to predict high-level information, using a stack of images as input.- Two main losses for this supervised phase:
1-
Traffic light state (binary classification).2-
semantic segmentation.
- Affordances include:
- Semantic segmentation maps (ground-truth available directly in CARLA).
- Distance and state of the incoming traffic light.
- Whether the agent is at an intersection or not.
- Distance from the middle of the lane.
- Relative rotation to the road.
- Why "implicit" affordance?
-
"We coined this scheme as "implicit affordances" because the RL agent implicit affordances because the RL agent do not use the explicit predictions but have only access to the implicit features (i.e the features from which our initial supervised network predicts the explicit affordances)."
- Hence multiple affordances are predicted in order to help the supervised training (similar to
auxiliary learning
).
-
- This feature extractor is frozen while training the
RL
agent.
- Two main losses for this supervised phase:
2-
These predicted affordances serve as input to a model-freeRL
agent.- The dueling network of the
Rainbow- IQN Ape-X
architecture was removed (very big in size and no clear improvement). - Training is distributed:
-
"CARLA is too slow for RL and cannot generate enough data if only one instance is used."
-
- Despite being limited to discrete actions (
4
forthrottle
/brake
and9
-24
forsteering
), it shows very good results.-
"We also use a really simple yet effective trick: we can reach more fine-grained discrete actions by using a bagging of multiple [here
3
consecutive] predictions and average them."
-
- Frequency of decision is not mentioned.
- The dueling network of the
- Benefits of this architecture:
- Affordances features are way lighter compared to images, which enables the use of a replay buffer (off-policy).
- All the more, since the images are bigger than usual (
4
288
x288
frames are concatenated, compared to "classical" single84
x84
frames) in order to capture the state of traffic lights.
- All the more, since the images are bigger than usual (
- This decomposition also brings some interpretable feedback on how the decision was taken (affordances could also be used as input of a rule-based controller).
- Affordances features are way lighter compared to images, which enables the use of a replay buffer (off-policy).
- This
2
-stage approach reminds me of the"learn to see"
/"learn to act"
concept of (Chen et al. 2019) in"Learning by Cheating"
.-
"As expected, these experiments prove that training a large network using only
RL
signal is hard".
-
-
About "model-free"
RL
.- As opposed to "model-based", e.g. (Pan et al. 2019) where a network is also trained to predict high-level information.
- But these affordances rather relate to the transition model, such as probability of collision or being off-road in the near futures from a sequence of observations and actions.
-
About the reward function:
- It relies on
3
components:1-
Desiredlateral position
.- To stay in the middle of the lane.
2-
Desiredrotation
.- To prevent oscillations near the center of lane.
3-
Desiredspeed
, which is adaptive.-
"When the agent arrives near a red traffic light, the desired speed goes linearly to
0
(the closest the agent is from the traffic light), and goes back to maximum allowed speed when it turns green. The same principle is used when arriving behind an obstacle, pedestrian, bicycle or vehicle." - The authors find that without this adaptation, the agent fails totally at braking for both cases of red traffic light or pedestrian crossing.
-
- It relies on
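A toy illustration (assumed shape, not the authors' exact formula) of the adaptive desired-speed term mentioned above: the speed target ramps linearly down to `0` when approaching a red light or an obstacle, and the reward penalises the deviation from that target.

```python
def desired_speed(v_max, dist_to_constraint, slow_down_dist=20.0):
    """Red light or obstacle ahead: ramp the target speed linearly down to 0."""
    if dist_to_constraint is None:                      # free road
        return v_max
    return v_max * min(1.0, max(0.0, dist_to_constraint / slow_down_dist))

def speed_reward(v_ego, v_max, dist_to_constraint):
    """Reward term penalising the deviation from the adaptive target speed."""
    target = desired_speed(v_max, dist_to_constraint)
    return 1.0 - abs(v_ego - target) / v_max            # maximal when tracking the target

print(speed_reward(v_ego=8.0, v_max=10.0, dist_to_constraint=10.0))  # target = 5 m/s -> reward 0.7
```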
"Hierarchical Reinforcement Learning Method for Autonomous Vehicle Behavior Planning"
-
[
2019
] [π] [ πCarnegie Mellon
] [ πGeneral Motors
] -
[
Hierarchical RL
]
Click to expand
Driving is treated as a task with multiple sub-goals. A meta controller choose an option among STOP AT STOP-LINE , FOLLOW FRONT VEHICLE . Based on this sub-goal, a low-level controller decides the THROTTLE and BRAKE action. Source. |
The state is shared across different hierarchical layers (option and action ). An attention model extracts the relevant information for the current sub-goal . Source. |
Authors: Qiao, Z., Tyree, Z., Mudalige, P., Schneider, J., & Dolan, J. M.
- Motivations:
1-
Have an unique learning-based method (RL
) for behavioural decision-making in an environment where the agent must pursue multiple sub-goals.2-
Improve convergence speed and sampling efficiency of traditionalRL
approaches.- These are achieved with hierarchical
RL
, by reusing the different learnt policies across similar tasks.
- About the hierarchical structure: concepts of "Actions" and "Options".
1-
First, a high-level option, seen as asub-goal
, is selected by some "meta-controller", depending on the scenario:Option_1
=STOP AT STOP-LINE
.Option_2
=FOLLOW FRONT VEHICLE
.
2-
Based on the selectedsub-goal
, an action network generates low-level longitudinal "actions" aboutthrottle
andbrake
- This continues until the next sub-goal is generated by the meta controller.
- About the
state
:- The authors decide to share one state set for the whole hierarchical structure.
- For each sub-goal, only relevant information is extracted using an attention mechanism:
-
"An attention model is applied to define the importance of each state element
I
(state
,option
) with respect to each hierarchical level and sub-goal". - For instance:
-
"When the ego car is approaching the front vehicle, the attention is mainly focused on
dfc
/dfs
." (1
-
distance_to_the_front_vehicle
/
safety_distance
) -
"When the front vehicle leaves without stopping at the stop-line, the ego car transfers more and more attentions to
ddc
/dds
during the process of approaching the stop-line." (1
-
distance_to_stop_sign
/
safety_distance
)
-
- About the reward function:
- The authors call that "hybrid reward mechanism":
- An "
extrinsic
meta reward" for theoption
-level. - An "
intrinsic
reward" for theaction
-level, conditioned on the selectedoption
.
- An "
- For instance, the terms related to the
target_speed
in the intrinsic reward will be adapted depending if the meta controller defines ahighway driving
or anurban driving
sub-goal. - At first sight, this reward function embeds a lot of parameters.
- One could say that the parameter-engineering effort of rule-based approaches has been transferred to the reward shaping.
- The authors call that "hybrid reward mechanism":
- One idea: "Hierarchical Prioritized Experience Replay" (
HPER
).- When updating the weights of the two networks, transitions {
s
,o
,a
,ro
,ra
,s'
} are considered. -
But "If the output of the option-value network
o
is chosen wrongly, then the success or failure of the corresponding action-value network is inconsequential to the current transition." - In other words, for a fair evaluation of the
action
, it would be nice to have the correct option. - One solution consists in setting priorities when sampling from the replay buffer:
- In the option-level, priorities are based on the error directly (the
TD-error
for theoptionNet
, similar to the traditionalPER
). - In the lower level, priorities are based on the difference between errors coming from two levels, i.e. the difference between (
TD-error
for theactionNet
) and (TD-error
for theoptionNet
).
- In the option-level, priorities are based on the error directly (the
- When updating the weights of the two networks, transitions {
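A small sketch of my own (with an assumed proportional prioritisation) of the `HPER` idea just described: option-level transitions are prioritised by their own `TD-error`, while action-level transitions are prioritised by the difference between the two errors, so that a good low-level action is not blamed for a wrong high-level option.

```python
import numpy as np

def hper_priorities(td_error_option, td_error_action, eps=1e-3):
    """Option-level: own TD-error. Action-level: difference between the two TD-errors."""
    p_option = np.abs(td_error_option) + eps
    p_action = np.abs(td_error_action - td_error_option) + eps
    return p_option, p_action

def sample_indices(priorities, batch_size, alpha=0.6, rng=np.random.default_rng()):
    """Proportional sampling from a replay buffer given per-transition priorities."""
    probs = priorities ** alpha
    probs /= probs.sum()
    return rng.choice(len(priorities), size=batch_size, p=probs)
```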
- About the simulator:
VIRES VTD
.
"DeepRacer: Educational Autonomous Racing Platform for Experimentation with Sim2Real Reinforcement Learning"
-
[
sim2real
,end-to-end
,sampling efficiency
,distributed RL
]
Click to expand
real2sim experiment for model-free RL end-to-end (from monocular camera to low-level controls) track following using a 1/18th scale car. Source. |
Comparison of tools to perform RL sim2real applications. Source. |
Authors: Balaji, B., Mallya, S., Genc, S., Gupta, S., Dirac, L., Khare, V., Roy, G., Sun, T., Tao, Y., Townsend, B., Calleja, E., Muralidhara, S. & Karuppasamy, D.
- What?
-
"
DeepRacer
is an experimentation and educational platform forsim2real
RL
."
-
- About the
POMDP
and the algorithm:- Observation:
160x120
grayscale front view. - Action:
10
discretized values:2
levels for throttle and5
for steering. - Timestep: One action per observation. At
15 fps
(average speed1.6 m/s
). model-free
(that is important)PPO
to learn the policy.
- Observation:
- Tools:
Amazon RoboMaker
: An extension of theROS
framework with cloud services, to develop and test deploy this robot software.Amazon SageMaker
: The Amazon cloud platform to train and deploy machine learning models at scale using theJupyter
Notebook.Amazon S3
(Simple Storage Service) to save and store the neural network models.- [
OpenAI Gym
] interface between theagent
and theenv
. Intel Coach
RL
framework for easy experimentation.Intel OpenVINO
, a toolkit for quick development of vision-based applications, to convert theTensorflow
models to an optimized binary.Gazebo
robotics simulator.ROS
for the communication between the agent and the simulation.ODE
(Open Dynamics Engine) to simulate the laws of physics using the robot model.Ogre
as graphics rendering engine.Redis
, an in-memory database, as a buffer to store the experience tuples <obs
,action
,reward
,next_obs
>.
- About
sim2real
:- The policy is learnt (+tested) from simulations ("replica track") and then transferred (+tested) to real world.
-
"The entire process from training a policy to testing in the real car takes
< 30
minutes." - The authors mention several approaches for
sim2real
:- Mixing
sim
andreal
experiences during training:- E.g. learn features from a combination of simulation and real data.
- Mix expert demonstrations with simulations.
- Model-based dynamics transfer:
- Assess simulation bias.
- Learn model ensembles.
- Perform calibration: here, they have to match the robot model to the measured dimensions of the car.
- Domain randomization: simulation parameters are perturbed during training to gain robustness.
- Add adversarial noise.
- Observation noise, such as random colouring.
- Action noise (up to
10%
uniform random noise tosteering
andthrottle
). - Add "reverse direction of travel" each episode (I did not understand).
Privileged learning
:- E.g. learn semantic segmentation at the same time, as an auxiliary task.
- Mixing
- They note that (I thought they were using grayscale?):
-
"Random colour was the most effective method for
sim2real
transfer."
-
- About the distributed rollout mechanism, to improve the sampling efficiency:
-
"We introduce a training mechanism that decouples
RL
policy updates with the rollouts, which enables independent scaling of the simulation cluster." - As I understand, multiple simulations are run in parallel, with the same dynamics model.
- Each worker sends its experience tuples (not the gradient) to the central
Redis
buffer where updates of the shared model are performed.
-
"Autonomous Driving using Safe Reinforcement Learning by Incorporating a Regret-based Human Lane-Changing Decision Model"
-
[
2019
] [π] [ πMichigan State University, Clemson University
] -
[
safe RL
,action masking
,motivational decision model
,regret theory
,CARLA
]
Click to expand
Every time after the RL agent chooses an action, the supervisor checks whether this action is safe or not. It does that by predicting the trajectories of other cars using a regret decision model. Source. |
The two actions (keep lane and change lane ) have different probabilities of occurrence, and different harms (costs) (I ) that can be formulated as utilities (II ), expressed with physical variables (III ). Affordance indicators are used in the world representation. Source. |
Authors: Chen, D., Jiang, L., Wang, Y., & Li, Z.
- Main motivation for
safeRL
for two-lane highway scenarios:-
"We found that when using conventional
RL
, about14.5%
training epochs ended with collisions. [...] Collisions cause training unstable as the RL algorithm needs to reset the simulation whenever collisions happen."
-
- One idea: A "safety supervisor" evaluates the consequences of an ego-action and decides whether an it is "safe" using a prediction model.
- "What is predicted?"
- The
lane-changing
decisions of other cars.
- The
- "How?"
- The approach belongs to the motivational methods, hence explainable (as opposed to the data-driven methods).
- A "regret decision model" is developed:
-
"
Risks
in driving have two dimensions:harm
(=costs
) andprobabilities
." - A
probability
(relating to thetime-to-collision
), aweighting effect
function (non-linear function of the probability), acost
function, autility
(cost normalized bycollision cost
) and aregret
(non-linear function of the utility) are defined. - The (deterministic) decision is based on the resulting the
net advantage
(~weighted regret) of optionC
(change) over optionK
(keep lane).
-
- "Can we draw a parallel with how human drivers reason?"
-
"Not all the drivers have experienced collisions, but all of them can perceive the threat of a potential collision. It is plausible to assume that the surrounding driver perceives the threat of the approaching vehicle to be proportional to its kinematic energy." Hence somehow proportional to
speed²
.
-
- "What does
safe
means here?"- Short-horizon rollouts are performed.
1-
The ego trajectory is anticipated based on theaction
candidate.2-
Trajectories are obtained for all other cars using the predictor and their current speeds.
- The action is considered
safe
if the ego agent stays on the road and if its trajectory does not intersect (in spatial-temporal domain) with any other trajectory.
- Short-horizon rollouts are performed.
- What if the action is deemed
unsafe
?1-
The supervisor replaces the identified risky action with a safe option (probably based on the model).- This avoids terminating the episode and starting a new one during training and obviously it improves safety during testing.
2-
An experience tuple <state
,action
,r_collision
,*
(no next-action)> is created and added to the experience replay buffer (together with safe tuples). This will be sampled during the update phase.- It reminds me the idea of another work by the authors ("Deep Q-Learning with Dynamically-Learned Safety Module : A Case Study in Autonomous Driving" - detailed below) where two distinct buffers (
safe
andcollision
) are maintained and equally sampled.
- It reminds me the idea of another work by the authors ("Deep Q-Learning with Dynamically-Learned Safety Module : A Case Study in Autonomous Driving" - detailed below) where two distinct buffers (
- "What values for the model parameters?"
- Ideally, these values should be inferred for each observed car (they are parts of the hidden state). For instance, within a
POMDP
formulation, using a belief tracker. - Here, a single set of parameters is derived via
max-likelihood
based on human demonstration.
- Ideally, these values should be inferred for each observed car (they are parts of the hidden state). For instance, within a
- "What is the impact on training?"
- It is faster and more stable (almost constant improvement rate) compared to conventional
RL
.
- It is faster and more stable (almost constant improving rate) compared to conventional
- "What is predicted?"
- Another idea: hierarchical decision-making:
- The
RL
agent first selects "manoeuvre-like" decisions among {decelerating
,cruising
,accelerating
} and {lane change
,lane keeping
}. - Its decision is then implemented by a low-level controller (
PID
).-
"This hierarchical design greatly reduces the training time compared to methods using agents to output control signals directly."
-
- The
- One related previous work (about regret decision model):
- "A human-computer interface design for quantitative measure of regret theory" - (Jiang L. & Wang Y., 2019).
"Learning Resilient Behaviors for Navigation Under Uncertainty Environments"
-
[
2019
] [π] [ποΈ] [ποΈ] [ πFuzhou University, University of Maryland, University of Hong Kong
] [ πBaidu
] -
[
uncertainty estimation
,uncertainty-aware policy
,SAC
]
Click to expand
The confidence in the prediction (left) is used as an uncertainty estimate. This estimate impacts the decision (ΞΌ = mean of the steering action distribution) of the agent. Source. |
The variance of the steering action distribution (behavioural uncertainty ) is not estimated by the agent itself, but rather built by a simple mapping-function from the environmental uncertainty estimated by the prediction module. Source. |
Authors: Fan, T., Long, P., Liu, W., Pan, J., Yang, R., & Manocha, D.
- One motivation: derive an uncertainty-aware +
model-free
+RL
policy.- "Uncertainty-aware":
- The core idea is to forward the observation uncertainty to the behaviour uncertainty (i.e. uncertainty of the action distribution), in order to boost exploration during training.
Model-free
as opposed tomodel-based
RL
methods.- In
model-based
methods, the collision probability and uncertainty prediction can [explicitly] be formulated into the risk term for anMPC
to minimize. - Here, the action selection is directly output by a net, based on raw laser data (i.e. not from some
MPC
that would require challenging parameter tuning).
- In
RL
as opposed toIL
(where uncertainty estimation has already been applied, by recycling techniques of supervised learning).- Besides, in
IL
, it is difficult to learn policies to actively avoid uncertain regions.
- Besides, in
- "Uncertainty-aware":
- The proposed framework consists of
3
parts:1-
A prediction net:- Inputs: A sequence of laser scans and velocity data.
- Outputs:
1-1.
The motion prediction of surrounding environments (this seems to be trajectories).1-2.
Some associated uncertainty: "we expect to measure the environmental uncertainty by computing the confidence of the prediction results."
- It is trained with supervised learning with recorded trajectories. The loss function therefore discourages large uncertainty while minimizing prediction errors.
2-
A policy net:- Inputs:
2-1.
The two estimates (uncertainty
andmotion information
) of the predictor.2-2.
The currentlaser scan
, the currentvelocity
and the relativegoal position
.
- Outputs: The
mean
* of the action distribution (steering
). - It is trained with model-free
RL
, using theROS
-basedStage
simulator.
- Inputs:
3-
A parallel NON-LEARNABLEmapping function
estimates thevariance
of that action distribution directly from the environmental uncertainty.- Input: The "predicted" environmental uncertainty. (I would have called it
observation uncertainty
). - Output: The
variance
of the action distribution (steering
). - About the mapping: The uncertainty predictions are weighted according to the distance of each laser point:
-
"We consider that the closer the laser point, the higher the impact on the action."
-
- Again, the variance of the action distribution is not learnt!
- It reminds me of the work of (Galias, Jakubowski, Michalewski, Osinski, & Ziecina, 2019) where best results are achieved when the policy outputs both the
mean
andvariance
. - Here, the policy network should learn to adaptively generate the
mean
value according to thevariance
value (capturing the environment uncertainty), e.g. exhibit more conservative behaviours in the case of high environmental uncertainty.
- It reminds me the work of (Galias, Jakubowski, Michalewski, Osinski, & Ziecina, 2019) where best results are achieved when the policy outputs both the
- Input: The "predicted" environmental uncertainty. (I would have called it
- About the
RL
method:SAC
= Soft Actor-Critic.- The above defined mapping forwards environmental uncertainties to the action variance.
- The idea is then to encourage the agent to reduce this action variance (
distribution entropy
) in order to obtain some "uncertainty-averse" behaviour. SAC
is appropriate for that:
- The idea is then to encourage the agent to reduce this action variance (
-
"The key idea of
SAC
is to maximize theexpected return
andaction entropy
together instead of the expected return itself to balance the exploration and exploitation."SAC
trains a stochastic policy with entropy regularization, and explores in an on-policy way.- My interpretation:
1-
The agent is given (non-learnable) anaction variance
from the uncertainty mapping.2-
This impacts its objective function.3-
It will therefore try to decrease this uncertainty of action distribution and by doing so will try to minimize the environmental uncertainty.4-
Hence more exploration during the training phase.
- Similar to the
ε
-greedy annealing process inDQNs
, thetemperature
parameter is decayed during training to weight between the two objectives (entropy
of policy distribution andexpected return
).SAC
incorporates theentropy
measure of the policy into the reward to encourage exploration, i.e. the agent should act as randomly as possible [encourage uniform action probability] while it is still able to succeed at the task.
- The above defined mapping forwards environmental uncertainties to the action variance.
- Bonus (not directly connected to their contributions): How to model uncertainties in
DNN
s?-
"The
aleatoric
uncertainty (data uncertainty) can be modelled by a specific loss function for the uncertainty term in the network output". -
"The
epistemic
uncertainty (i.e. model uncertainty) can be captured by the Monte-Carlo Dropout (MC
-Dropout) technique" - dropout can be seen as a Bayesian approximation.
-
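A possible reading, as a toy sketch of my own (not the authors' mapping), of the non-learnable `mapping function` described above: each laser point comes with a predicted uncertainty, the points are weighted by the inverse of their distance (closer points matter more), and the weighted sum is scaled into the variance of the `steering` distribution.

```python
import numpy as np

def steering_variance(point_uncertainties, point_distances, sigma_min=0.05, sigma_max=0.6, scale=1.0):
    """Distance-weighted environmental uncertainty mapped to the std of the steering action."""
    d = np.maximum(np.asarray(point_distances), 1e-3)
    weights = (1.0 / d) / np.sum(1.0 / d)              # closer laser points get a higher weight
    env_uncertainty = float(np.dot(weights, point_uncertainties))
    return float(np.clip(scale * env_uncertainty, sigma_min, sigma_max))
```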
"Deep Q-Learning with Dynamically-Learned Safety Module : A Case Study in Autonomous Driving"
-
[
2019
] [π] [ πUniversity of Michigan and West Virginia University
] [ πFord
] -
[
DQN
]
Click to expand
The affordance indicator refers to the MDP state . It has length 20 and represents the spatio-temporal information of the nearest traffic vehicles. The agent controls the discrete acceleration (maintain , accelerate , brake , and hard brake ) and selects its lane (keep lane , change to right , and change to left ). Source. |
Two purposes: 1- Accelerates the learning process without inhibiting meaningful exploration. 2- Learn to avoid accident-prone states. Note that collisions still occur, but less often. Source. |
Authors: Baheri, A., Nageshrao, S., Kolmanovsky, I., Girard, A., Tseng, E., & Filev, D.
- Main motivation: guide the exploration process when learning a
DQN
-based driving policy by combining two safety modules (rule-based
andlearning-based
):- A handcrafted safety module.
- A heuristic rule ensures a minimum relative gap to a traffic vehicle.
- A dynamically-learned safety module.
1-
It first predicts future states within a finite horizon.2-
And then determines if one of the future states violates the handcrafted safety rule.
- A handcrafted safety module.
- One idea for
DQN
: Two buffers are used to store experience (off-policy learning). They are equally sampled in the update stage.1-
Safe buffer: As in classic DQN.2-
Collision buffer: The TD-target only consists of the observedreward
(i.e. thes
is a terminal state).- An experience tuple ends in this collision buffer:
1-
If thehandcrafted
safety module considers the action is not safe or if the action leads to a static collision in the next observed state.2-
If thedynamically-learned
safety module detects a dynamic collision for any future predicted states. In that case, an additional negative term is assigned to the reward.
- An experience tuple ends in this collision buffer:
- Although the sampling is uniform in each buffer, the use of two buffer can been seen as some variant of prioritized experience replay (
PER
) where the sampling is biased to expose the agent to critical situations. - Contrary to
action masking
, the "bad behaviours" are not discarded (half the batch is sampled from the collision buffer).- The agent must therefore generalize the states that could lead to undesirable states.
- Relation to
model-based RL
:- The transition function of the
MDP
is explicitly learnt by theRNN
(mapping <a
,s
> tos'
). - The dynamically-learned safety module incorporates a model lookahead. But prediction is not used for explicit planning.
- Instead, it determines whether the future states lead to undesirable behaviours and consequently adapts on-the-fly the reward function.
- The transition function of the
- Related works:
- "Autonomous Highway Driving using Deep Reinforcement Learning" - (Nageshrao, Tseng, & Filev, 2019).
- Basically, the same approach is presented.
-
"Training a standard
DDQN
agent without explicit safety check could not learn a decent policy and always resulted in collision. [...] Even withcontinuous adaptation
the mean number safety trigger never converges to zero." - Since the
NN
function approximation can potentially chose a non-safe action, the agent should be augmented with some checker that detects safety violation. - The authors introduce the idea of a "short horizon safety checker".
- If the original
DDQN
action choice is deemed unsafe, then the safety check replaces it with an alternate "safe" action, in this case relying on anIDM
controller. - This technic enhances learning efficiency and without inhibiting meaningful exploration
- For additional contents on "Safe-RL in the context of autonomous vehicles", the first author @AliBaheri has written this github project.
- "Autonomous Highway Driving using Deep Reinforcement Learning" - (Nageshrao, Tseng, & Filev, 2019).
"Simulation-based reinforcement learning for autonomous driving"
-
[
2019
] [π] [ποΈ] [ποΈ] [ πUniversities of Warsaw and Jagiellonian
] [ πdeepsense.ai
] -
[
sim-to-real
,end-to-end
,CARLA
]
Click to expand
Neural architecture of the policy function trained with PPO : the RGB image is concatenated with its semantic segmentation. Randomisation is performed to prevent over-fitting and increase sampling-efficiency. It is also worth mentioning the high-level navigation command that is provided to guide the agent when approaching intersections. Source. |
Several options for producing the std of the steering distribution. Best results are achieved when the policy outputs both mean and std . The left screenshot illustrates that shaped rewards (as opposed to sparse rewards where rewards are only given at the goal state) can bias learning and lead to un-intended behaviours: to make the agent stay close to the centre line, the authors originally penalized the gap in X , Y but also Z coordinates. ''Due to technical reasons our list of lane-centre positions was actually placed above the road in the Z axis. This resulted in a policy that drives with two right side wheels placed on a high curb, so its elevation is increased and distance to the centre-line point above the ground is decreased''. Source-1 Source-2. |
Authors: Galias, C., Jakubowski, A., Michalewski, H., Osinski, B., & Ziecina, P.
- One goal: learn the continuous
steering
control of the car to stay on its lane (no consideration of traffic rules) in anend-to-end
fashion usingRL
.- The
throttle
is controlled by aPID
controller with constant speed target. - In addition, the simulated environment is static, without any moving cars or pedestrians.
- The authors want to test how well a policy learnt in simulation can transfer to the real world. This is sometimes called
sim-to-real
.- For this second motivation, this reminds me the impressive
Wayve
's work "Simulation Training, Real Driving".
- For this second motivation, this reminds me the impressive
- The
- How to model the standard deviation parameter of the continuous
steering
action distribution?- It could be set to a constant value or treated as an external learnable variable (detached from the policy).
- But the authors found that letting the policy control it, as for the
mean
, gave the best results.- It allows the policy to adjust the degree of exploration on a per-observation basis.
-
"An important implementation detail was to enforce an upper boundary for the standard deviation. Without such a boundary the standard deviation would sometime explode and never go down below a certain point (the entropy of the policy climbs up), performing poorly when deployed on real-world cars."
- An interesting variant:
end-to-mid
, i.e. do not directly predict raw control commands.- In another work, the task was not to directly predict the
CARLA
steering
command, but rather some target waypoint on the road, and "outsource" the steering control task to some external controller. -
"Given a target waypoint, low-level steering of the driving wheel is executed in order to reach this point. In simulation, it is realized by a
PID
controller while in the case of the real car, we use a proprietary control system. The action space is discrete - potential waypoints are located every5
degrees between -30
and30
, where0
is the current orientation of the vehicle."
- Two ideas: The use of
3
sources of observations. And the inclusion of segmentation mask to thestate
(input) of theRL
net:1-
An RGB
image.- It is concatenated with its semantic segmentation: it passes through a previously-learnt segmentation network.
- This can be thought of as an intermediate human-designed or learned representation of the real world.
- As said, the
seg-net
has been learnt before with supervised learning.- But it could also be (further) trained online, at the same time as the policy, leading to the promising concept of
auxiliary learning
. - This has been done for instance by (Tan, Xu, & Kong, 2018), where a framework of
RL
with image semantic segmentation network is developed to make the whole model adaptable to reality.
2-
A high-level navigation command to guide the agent when approaching an intersection.- In {
LANE FOLLOW
,GO STRAIGHT
,TURN RIGHT
,TURN LEFT
}. - This is inspired by (Codevilla et al., 2017).
3-
The currentspeed
andacceleration
.- The authors tried to also include the information about the last action.
- Without success: the car was rapidly switching between
extreme left
andextreme right
. - In other words, the steering was being controlled in a pulse-width-modulation-like manner.
- One idea to promote generalisation and robustness: Randomisation.
- It should also prevent overfitting and improve sample efficiency.
- Some randomizations are applied to visual camera input.
- An additional loss term is introduced to check if the policy outputs a similar distribution for the perturbed and unperturbed images (see the sketch after this list).
- Some perturbations are also applied to the car dynamics.
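A minimal sketch of such a consistency term, assuming the policy returns a `torch` distribution; the perturbation function and the KL form are assumptions, not the authors' exact loss.

```python
import torch

def consistency_loss(policy, obs, perturbed_obs):
    """Penalise differences between the action distributions produced for an image
    and for its randomised (perturbed) version."""
    dist_clean = policy(obs)             # e.g. a torch.distributions.Normal
    dist_noisy = policy(perturbed_obs)
    return torch.distributions.kl_divergence(dist_clean, dist_noisy).mean()
```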
- One idea for debug: generate saliency map.
- The idea is to find which region of the image has the most impact in the prediction of the steering command.
- This can be done by blurring different patches of the input image, i.e. removing information from that patch, and measuring the output difference.
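A minimal sketch of this occlusion-based saliency idea (patch size and the zero-masking instead of blurring are illustrative choices):

```python
import numpy as np

def occlusion_saliency(predict_steering, image, patch=16):
    """Mask square patches and measure how much the predicted steering changes."""
    base = predict_steering(image)
    h, w = image.shape[:2]
    saliency = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0   # remove information from that patch
            saliency[i // patch, j // patch] = abs(predict_steering(occluded) - base)
    return saliency   # large values = regions with the most impact on the command
```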
- Some good ideas mentioned for future works:
- To improve the driving stability, try to focus the training on fragments of scenarios with the highest uncertainty. c.f. concepts of Bayesian
RL
. - Go to
mid-to-end
, using an intermediate representation layer, for example a2D
-map or a bird's-eye view. e.g.ChauffeurNet
- also detailed on this page. - To further improve the sampling efficiency, model-based methods could be integrated. c.f. "Wayve Simulation Training, Real Driving"
- To improve the driving stability, try to focus the training on fragments of scenarios with the highest uncertainty. c.f. concepts of Bayesian
"Dynamic Interaction-Aware Scene Understanding Reinforcement Learning in Autonomous Driving"
-
[
2019
] [π] [ πUniversity of Freiburg
] [ πBMW
] -
[
feature engineering
,graph neural networks
,interaction-aware networks
,SUMO
]
Click to expand
Some figures:
In the proposed DeepSet approach, embeddings are first created depending on the object type (using φ1 for vehicles and φ2 for lanes), forming the encoded scene . They are 'merged' only in a second stage to create a fixed vector representation. Deep Set can be extended with Graph Convolutional Networks when combining the set of node features to capture the relations - interaction - between vehicles. Source. |
Authors: Huegle, M., Kalweit, B., Werling, M., & Boedecker, J.
- Two motivations:
1-
Deal with an arbitrary number of objects or lanes.- The authors acknowledge that a fixed-size state is sufficient for scenarios like highway driving, where interactions with the direct neighbours of the agent are most important.
- But they also note that a variable-length list can be very important in certain situations such as urban driving.
- To deal with the variable-length dynamic input set
X-dyn
, their encodings are just summed
.- This makes the
Q-function
permutation invariant w.r.t. the order of the dynamic input and independent of its size.
2-
Model interactions between objects in the scene representations.- The structure of Graph Convolutional Networks (
GCN
) is used for that. All node features are combined by thesum
. -
"
Graph Networks
are a class of neural networks that can learn functions on graphs as input and can reason about how objects in complex systems interact."
- The structure of Graph Convolutional Networks (
- Baselines:
-
"Graph-Q is compared to two other interaction-aware
Q
-learning algorithms, that use input modules originally proposed for supervised vehicle trajectory prediction."- Convolutional Social Pooling (
SocialCNN
) is using a grid-map: "a social tensor is created by learning latent vectors of all cars by an encoder network and projecting them to a grid map in order to learn spatial dependencies". - Vehicle Behaviour Interaction Networks (
VBIN
) imposes working with a fixed number of cars since the embedding vectors are just concatenated, i.e. not aggregated by a sum as in the Deep Sets approach.
- Convolutional Social Pooling (
- A built-in
SUMO
rule-based controller is also used for comparison.
-
- Previous works:
Dynamic input for deep reinforcement learning in autonomous driving
- detailed below.- Introducing the idea of
Deep Set
.
High-level Decision Making for Safe and Reasonable Autonomous Lane Changing using Reinforcement Learning
- detailed below.- How to ensure safety when working with a
DQN
? - The concept of
action masking
is applied, i.e. the technique of not exposing to the agent dangerous or non-applicable actions during the action selection.
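A minimal sketch of the Deep Sets idea used in this and the previous work: embed each surrounding vehicle, sum the embeddings, then map the pooled vector to a fixed-size representation. The layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DeepSetEncoder(nn.Module):
    """Permutation-invariant encoding of a variable number of surrounding vehicles."""
    def __init__(self, vehicle_dim=4, embed_dim=64, out_dim=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(vehicle_dim, embed_dim), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(embed_dim, out_dim), nn.ReLU())

    def forward(self, vehicles):                 # vehicles: (n_vehicles, vehicle_dim)
        pooled = self.phi(vehicles).sum(dim=0)   # the sum makes the result order-independent
        return self.rho(pooled)                  # same size whatever n_vehicles is
```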
"Driving in Dense Traffic with Model-Free Reinforcement Learning"
-
[
2019
] [π] [simulator
] [MPC
] [ML
] [ποΈ] [ πBerkeley & Carnegie Mellon
] [ πHonda
] -
[
SISL
,PPO
,MPC
,merging scenarios
]
Click to expand
Some figures:
Source. |
The occupancy-grid-like observation space is divided into 4 channels, each containing 3 lanes. An ego-vehicle specific feature vector is also considered. The authors use policy-gradient Proximal Policy Optimisation - PPO - method and decided not to share parameters between the actor and the critic. Source. |
In another work, the authors try to incorporate an RNN as a prediction model into an MPC controller, leading to a reliable, interpretable, and tunable framework which also contains a data-driven model that captures interactive motions between drivers. Source. |
Authors: Saxena, D. M., Bae, S., Nakhaei, A., Fujimura, K., & Likhachev, M.
- One motivation: learn to perform a comfortable merge into dense traffic using model-free RL.
- Dense traffic situations are difficult: traditional rule-based models fail entirely.
- One reason is that in heavy traffic situations vehicles cannot merge into a lane without cooperating with other drivers.
- Model-free means it does not rely on driver models of other vehicles, or even on predictions about their motions. No explicit model of inter-vehicle interactions is therefore needed.
- Model-free also means that natively, safety cannot be guaranteed. Some masking mechanisms (called "overseer") are contemplated for future work.
- Dense traffic situations are difficult: traditional rule-based models fail entirely.
- One idea for merging scenarios:
-
Many other works "only accommodate a fixed merge point as opposed to the more realistic case of a finite distance for the task (lane change or merge) as in our work."
-
- One idea to adapt
IDM
to dense scenarios:-
"
IDM
is modified to include astop-and-go
behaviour that cycles between a non-zero and zero desired velocity in regular time intervals. This behaviour is intended to simulate real-world driving behaviours seen in heavy-traffic during rush-hour."
-
- One idea about action space:
-
"Learning a policy over the acceleration and steering angle of a vehicle might lead to jerky or oscillatory behaviour which is undesirable. Instead, we train our network to predict the time derivatives of these quantities, i.e. jerk
j
and steering rate δ̇
. This helps us maintain a smooth signal over the true low-level control variables." - The policy for jerk and steering rate is parameterised as Beta distributions (allegedly "this makes training more stable as the policy gradients are unbiased with respect to the finite support of
Ξ²
-distributions").
-
- One of their related works used as baseline:
- "Cooperation-Aware Lane Change Maneuver in Dense Traffic based on Model Predictive Control with Recurrent Neural Network" from (Bae et al., 2019).
-
"A
RNN
generates predictions for the motions of neighbouring vehicles based on a history of their observations. These predictions are then used to create safety constraints for anMPC
optimisation."1
- For the prediction part:- A
Social-GAN
(socially acceptable trajectories) is used to predict the other vehiclesβ interactive motions that are reactive to the ego vehicleβs actions. - Other prediction modules are tested, such as a constant velocity model.
-
"When all drivers are cooperative, all three prediction models can lead to successful lane change. That is, the imprecision of predictions on driversβ interactive motions is not critical when the drivers are very cooperative, since the drivers easily submit space to other vehicles, even with rough control inputs resulting from inaccurate motion predictions. This, however, is no longer valid if the drivers are aggressive."
- A
2
- For the MPC part:- A heuristic algorithm based on Monte Carlo simulation along with a roll-out is used to deal with the non-convexity of the problem (some constraints are non-linear).
- To reduce the search complexity during the sampling phase, the action space is adapted. For instance only steering angles to the left are considered when turning left.
-
"This
SGAN
-enabled controller out-performs the learning-based controller in the success rate, (arguably) safety as measured by minimum distances, and reliability as measured by variances of performance metrics, while taking more time to merge".
"Behavior Planning at Roundabouts"
Click to expand
Some figures:
Using recurrent units in a DQN and considering the action history. Source. |
Hidden modes framework: decomposing the non-stationary environment into multiple stationary environments, where each mode is an MDP with distinct dynamics. Source. |
Author: Khurana, A.
- One idea: using recurrent nets (hence
D
R
DQN
) to integrate history in order to cope with partial observability.- Two
LSTM
layers, (considering15
past observations) were added after the DQN
layers. - They also try to include the action history of the agent, leading to
A
DRQN
. But this does not modify the score.
- Two
- Another idea: several context-based action spaces:
- Decision in roundabouts consists in:
- Merging -
action_space
= {go
,no go
} - Traversal -
action_space
= {acc
,decelerate
,change-left
,change-right
,cv
} - Exit -
action_space
= {acc
,decelerate
,change-left
,change-right
,cv
}
- Merging -
- Justification for using discrete action space: behavioural planning happens on a slower time scale than motion planning or trajectory control.
- This reminds me of some works on hierarchical RL (e.g.
Options
framework).
- Another idea: Curriculum learning
-
"Each model is first trained without any other interacting vehicle so that it learns the most optimal policy and later with other vehicles with random initialization. In later stages, an additional bonus reward is given to merging and traversal if they lead to successful exit to enable long-term consistent behaviours."
-
- One note about the
POMDP
formulation:-
"This also enables us to integrate planning and prediction into a single problem, as the agent learns to reason about its future."
- I am a little bit confused by their formulation of the
PO
MDP
.- I would have expected some hidden parameters to be defined and some belief on them to be tracked, as often done for AD, e.g. the intention of other participants.
- Instead, the
partial observability
refers here to the uncertainty in perception: "there is a0.2
probability that a car present in the agentβs perception field is dropped in the observation". - This imperfect state estimation encourages the robustness.
-
- One note about model-free RL:
- Using RL seems relevant to offer generalization in complex scenarios.
- But as noted by the authors: "the rule-based planner outperforms others in the case of a single-lane roundabout as there's no scope for lane change."
- One addressed problem: "non-stationary" environments.
- A single policy learned on a specific traffic density may perform badly on another density (the dynamic of the world modelled by the
MDP
changes over time). - The goal is to generalize across different traffic scenarios, especially across different traffic densities.
- One idea is to decompose the non-stationary environment into multiple stationary environments, where each mode is an MDP with distinct dynamics: this method is called
Hidden modes
.- How to then switch between modes? The authors proposed to use external information (Google Maps could for instance tell ahead if traffic jams occur on your planned route).
- But as the number of discrete modes increases, the
hidden-mode
method suffers from oscillations at the boundaries of the mode transitions.
- Thus the second idea is to consider one single model: this method is called
Traffic-Conditioned
.- Some continuously varying feature (ratio of
velocity of other vehicles
totarget speed
) is used. It should be representative of the non-stationary environment.
- Some continuously varying feature (ratio of
- One quote about the relation of hidden-mode formulation to hierarchical RL:
-
"For generalization, the hidden-mode formulation can also be viewed as a hierarchical learning problem where one
MDP
/POMDP
framework selects the mode while the other learns the driving behaviour given the mode".
-
- A single policy learned on a specific traffic density may perform badly on another density (the dynamic of the world modelled by the
"Reinforcement Learning Approach to Design Practical Adaptive Control for a Small-Scale Intelligent Vehicle"
-
[
2019
] [π] [ πChongqing University
] -
[
non-deep RL
,online learning
,model-based RL
]
Click to expand
Authors: Hu, B., Li, J., Yang, J., Bai, H., Li, S., & Sun, Y.
- What I like in this paper:
- The authors make sure they understand tabular RL methods, e.g. the difference between on-policy
SARSA(1)
,SARSA(Ξ»)
and off-policyQ-learning
, before going to deep RL.-
"Compared with
Q-learning
, which can be described as greedy, bold, and brave,SARSA(1)
is a conservative algorithm that is sensitive to control errors."
-
- They include a model-based algorithm (
Dyna-Q
) in their study. This seems promising when training directly in real world, where the sampling efficiency is crucial. - They claim RL methods bring advantages compared to PID controllers in term of adaptability (generalization - i.e. some PID parameters appropriate for driving on a straight road may cause issues in sharp turns) and burden of parameter tuning.
- They consider the sampling efficiency (better for model-based) and computational time per step (better for
1
-stepTD
methods than forSARSA(Ξ»)
).-
"
Q-learning
algorithm has a poorer converging speed than theSARSA(Ξ»)
andDyna-Q
algorithm, it balances the performance between the converging speed, the final control behaviour, and the computational complexity."
-
- Obviously, this remains far from real and serious AD. But this paper gives a good example of application of basic RL methods.
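As a reminder of what `Dyna-Q` adds over plain `Q-learning`, here is a minimal tabular sketch (learning rates and the number of planning steps are illustrative, not the paper's values):

```python
import random
from collections import defaultdict

def dyna_q_update(Q, model, s, a, r, s_next, actions, alpha=0.1, gamma=0.95, n_planning=10):
    """One tabular Dyna-Q step: a direct Q-learning update from real experience,
    followed by n_planning updates replayed from the learnt (deterministic) model."""
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)])
    model[(s, a)] = (r, s_next)                                  # memorise the transition
    for _ in range(n_planning):
        (ps, pa), (pr, ps_next) = random.choice(list(model.items()))
        Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps_next, b)] for b in actions) - Q[(ps, pa)])

Q = defaultdict(float)   # tabular action-value estimates
model = {}               # learnt one-step model, keyed by (state, action)
```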
"Learning When to Drive in Intersections by Combining Reinforcement Learning and Model Predictive Control"
-
[
2019
] [π] [ πChalmers University
] [ πZenuity
] -
[
model-free RL
,MPC
,Q-Masking
,POMDP
]
Click to expand
Two figures:
Source Left - Source Right. |
This work applies the path-velocity decomposition and focuses on the longitudinal control. Three intentions are considered: aggressive take-way , cautious (slows down without stopping), and passive give-way . Source. |
Authors: Tram, T., Batkovic, I., Ali, M., & Sjöberg, J.
- Main idea: hierarchy in learnt/optimized decision-making.
- A high-level decision module based on RL uses the feedback from the MPC controller in the reward function.
- The MPC controller is also responsible for handling the comfort of passengers in the car by generating a smooth acceleration profile.
- Previous works:
- "Learning Negotiating Behavior Between Cars in Intersections using Deep Q-Learning" - (Tram, Batkovic, Ali, & SjΓΆberg, 2019)
- "Autonomous Driving in Crossings using Reinforcement Learning" - (Jansson & GrΓΆnberg, 2017)
- In particular they reused the concept of "actions as Short Term Goals (
STG
)". e.g. keep set speed or yield for crossing car instead of some numerical acceleration outputs.- This allows for comfort on actuation and safety to be tuned separately, reducing the policy selection to a classification problem.
- The use of such abstracted / high-level decisions could be a first step toward hierarchical RL techniques (
macro-action
andoption
framework).
- Another contribution consists in replacing the Sliding Mode (
SM
) controller used previously by anMPC
, allegedly to "achieve safer actuation by using constraints".- The intention of all agents is implemented with a
SM
controller with various target values.
- The intention of all agents is implemented with a
- I find it valuable to have details about the training phase (not all papers do that). In particular:
- The normalization of the input features in [
-1
,1
]. - The normalization of the reward term in [
-2
,1
]. - The use of equal weights for inputs that describe the state of interchangeable objects.
- Use of a
LSTM
, as an alternative to a DQN with stacked observations. (Findings from (Jansson & Grönberg, 2017)).
- The normalization of the input features in [
- Additional notes:
- The main benefits of the combination seem to be the avoidance of overly conservative behaviours and the improved sample efficiency of the model-free RL approach.
- Such an approach looks particularly relevant (in terms of success rate and collision-to-timeout ratio [
CTR
]) for complex scenarios, e.g.2
-crossing scenarios. - For simple cases, the performance stays close to the baseline.
- Such approach looks to be particularly relevant (in term of success rate and collision-to-timeout ratio [
- The reliance (and the burden) on an appropriate parametrisation inherent to rule-based has not disappeared and the generalisation seems limited:
- "Since MPC uses predefined models , e.g. vehicle models and other obstacle prediction models, the performance relies on their accuracy and assumptions ."
- The problem is formulated as a
POMDP
.- Honestly, from a first read, I did not find how
belief tracking
is performed. Maybe something related to the internal memory state of the LSTM cells?
- Honestly, from a first read, I did not find how
- Sadly the simulator seems to be home-made, which makes reproducibility tricky.
- One quote about
Q-masking
, i.e. the technique of not exposing to the agent dangerous or non-applicable actions during the action selection.-
"Q-masking helps the learning process by reducing the exploration space by disabling actions the agent does not need to explore."
- Hence the agent does not have to explore these options, while ensuring a certain level of safety (but this requires another rule-based module π ).
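A minimal sketch of the Q-masking mechanism quoted above: forbidden actions are hidden from the argmax. The mask itself would come from a rule-based module (hypothetical example in the comment).

```python
import numpy as np

def masked_action_selection(q_values, applicable_and_safe):
    """Q-masking: unsafe or non-applicable actions are never exposed to the argmax,
    so the agent does not have to explore them."""
    masked = np.where(applicable_and_safe, q_values, -np.inf)
    return int(np.argmax(masked))

# e.g. a rule-based checker flags the first action as unsafe:
# masked_action_selection(np.array([1.2, 0.4, 0.9]), np.array([False, True, True])) -> 2
```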
-
"Cooperation-Aware Reinforcement Learning for Merging in Dense Traffic"
-
[
2019
] [π] [] [ π
Stanford
] [ πHonda
] -
[
POMDP
,offline RL
,value-based RL
,interaction-aware decision making
,belief state planning
]
Click to expand
One figure:
Source. |
Authors: Bouton, M., Nakhaei, A., Fujimura, K., & Kochenderfer, M. J.
- One idea: offline belief state RL to solve dense merging scenarios modelled as a POMDP.
- A belief updater explicitly maintains a probability distribution over the driver cooperation levels of other cars.
- Another idea: projection of the merging vehicle on the main lane. This reduces the problem to one dimension, allowing for IDM. Similar to the abstraction presented by R. Regele.
- One term: "C-IDM": Cooperative Intelligent Driver Model.
- It builds on IDM to control the longitudinal acceleration of the other vehicle and adds a cooperation level
c
∈
[0
,1
]. - Non-cooperative vehicles (
c=0
) are blind to the merging ego car and follow the vanilla IDM, while cooperative vehicles will yield (c=1
).
- It builds on IDM to control the longitudinal acceleration of the other vehicle and adds a cooperation level
- Another term: "Burn-in time"
- When creating a new configuration, vehicles and their parameters are drawn from distributions and then move according to driver models.
- The idea is between
10s
and20s
before spawning the ego car, to allow the initial state to converge to a more realistic situation.
- Another term: "Curriculum Learning": The idea is to train the agent by gradually increasing the difficulty of the problem. In this case the traffic density.
- Two take-aways (similar to what I identified at IV19)
"Previous works has shown that only relying on deep RL is not sufficient to achieve safety. The deployment of those policies would require the addition of a safety mechanism."
"Using deep reinforcement learning policies to guide the search of a classical planner (MCTS) may be a promising direction."
"Interaction-aware Decision Making with Adaptive Behaviors under Merging Scenarios"
-
[
multi agent RL
,interaction-aware decision making
,curriculum learning
,action masking
]
Click to expand
One figure:
Note: the visibility of each agent is assumed to be 100m in front and back, with 0.5m /cell resolution, for both its current lane (obs_cl ) and the other lane (obs_ol ). Source. |
Authors: Hu, Y., Nakhaei, A., Tomizuka, M., & Fujimura, K.
-
One term: "IDAS": interaction-aware decision making with adaptive strategies.
- The main goal is to generate manoeuvres which are safe but less conservative than rule-based approaches such as IDM and/or FSM.
- The idea is to learn how to negotiate with other drivers, or at least consider interactions in the decision process.
-
One idea: use multi-agent RL (MARL) to consider interactions between the multiple road entities.
- In particular, the agent receives rewards for its personal objective as well as for its contribution to the team's "success" (multi-agent credit assignment).
-
One idea: a masking mechanism prevents the agent from exploring states that violate common sense of human judgment (c.f. RSS) and increase the learning efficiency.
- This idea of adding rule-based constraints to a RL policy has been applied in many works. Recently in Wang, J. et al. for instance where prediction is also considered.
- Here, masking is not only based on safety, but also considers vehicle kinematics and traffic rules.
- A remaining question is where to apply the masking: either before the action selection (exposing only a subset of feasible actions), or after (penalizing the agent if it takes a forbidden action).
-
One quote (on the way to transfer to the real world):
"A motion control module will convert the discretized acceleration of the behaviour planner into continuous acceleration by applying algorithms like MPC at a higher frequency (100Hz)".
"Microscopic Traffic Simulation by Cooperative Multi-agent Deep Reinforcement Learning"
-
[
2019
] [π] [ πUniversity of Parma
] [ πVisLab
] -
[
multi-agent A3C
,off-policy learning
]
Click to expand
One figure:
Source. |
Authors: Bacchiani, G., Molinari, D., & Patander, M.
- One idea: "parallel actor-learners": to decorrelate the sequence of experiences used to update the policy network, a common approach is to sample <
s
,a
,r
,s'
> tuples from a memory buffer (experience replay).- Here, a multiple-agent setting is used instead: each agent acts in a different instance of the environment (hence diverse experiences) and sends its updates asynchronously to a central network.
- Another idea: "hybrid state representation": coupling some grid-like representation (
path to follow
,obstacles
,navigable space
) with a vector of explicit features (e.g.target speed
,distance to goal
,elapsed time ratio
).- This combination offers a generic scene representation (i.e. independent of the number of vehicles) while allowing for tuning explicit goals (
target speeds
) in the state. - Such hybrid representation seems popular, as identified at IV19).
- This combination offers a generic scene representation (i.e. independent of the number of vehicles) while allowing for tuning explicit goals (
- Other ideas:
- Stacking the
n=4
most recent views to capture the evolution of the scene (e.g. relative speeds). - Action repeat technique for temporal abstraction to stabilize the learning process (c.f. "frame skip").
- Stacking the
- One concept: "Aggressiveness tuning". Together with the
target speed
, theelapsed time ratio
(ETR) feature is used to tune the aggressiveness of the car:
"ETR Values close to
1
will induce the agent to drive faster, in order to avoid the predicted negative return for running out of time. Values close to0
will tell the driver that it still has much time, and it is not a problem to yield to other vehicles."
"Dynamic Input for Deep Reinforcement Learning in Autonomous Driving"
-
[
2019
] [π] [ποΈ] [ πUniversity of Freiburg
] [ πBMW
] -
[
feature engineering
,off-policy learning
,DQN
,SUMO
]
Click to expand
- One diagram is better than 100 words:
By summing all the dynamic terms (one per surrounding vehicle), the input keeps a constant size. Source. |
Authors: Huegle, M., Kalweit, G., Mirchevska, B., Werling, M., & Boedecker, J.
- One goal: find a flexible and permutation-invariant representation to deal with variable-sized inputs (variable number of surrounding vehicles) in high-level decision making for autonomous lane changes, using model-free RL.
- Four representations are considered:
- Relational Grid (fixed-size vector)
- Occupancy grid (fixed-size grid processed by some
CNN
) - Set2Set-Q (some
RNN
combined with an attention mechanism to create a set representation which is permutation invariant w.r.t. the input elements) - DeepSet-Q (proposed approach)
- They are used as inputs to train and evaluate DQN agents.
- Four representations are considered:
- One finding:
- "
Deep Sets
were able to outperform CNN and recurrent attention approaches and demonstrated better generalization to unseen scenarios".
- "
- One quote: about the use of policy-based (as opposed to value-based), on-policy (as opposed to off-policy), model-free RL (here
PPO
).
"Due to the higher demand for training and the non-trivial application to autonomous driving tasks of
on-policy
algorithms, we switched tooff-policy
Q-learning."
"Seeking for Robustness in Reinforcement Learningβ―: Application on Carla Simulator"
Click to expand
- Some background:
n -step TD learning. Source. |
Authors: JaΓ’fra, Y., Laurent, J.-L., Deruyver, A., & Naceur, M. S.
- One related work: reading this paper reminded me of one conclusion of the 2019 CARLA AD Challenge:
"Robust open-source AV stacks are not a commodity yet: No public AV stack has solved the challenge yet."
- One idea: use an actor-critic architecture with multi-step returns (
n
-stepA2C
) to "achieve a better robustness".- The introduction of a critic aims at reducing the variance of the gradient of policy-based methods.
- As illustrated in the above figure, in value-based methods, the TD-target of a critic can be computed in several ways:
- With bootstrapping, using the current estimate for the next state
s'
:1
-step TD - low variance but biased estimate ... - ... Up to considering all the steps in the trajectory until termination:
Monte Carlo
- high variance but unbiased estimate. - In between are multi-step returns critics (
MSRC
). Obviously a trade-off between bias/variance.
- With bootsrapped, using the current estimate for the next state
- Some limitations:
- The
MDP
is not very detailed, making reproduction and comparison impossible.- For instance, the
action
space allegedly contains3
discrete driving instructions [steering
,throttle
, andbrake
], but no concrete numbers are given.
state
andreward
function: no quantitative description. - Based on their text, I can assume the authors were participating to the 2019 CARLA AD challenge. Maybe track
1
. But again, information about the town/scenario is missing.
- For instance, the
- No real comparison is performed: it should be feasible to use the built-in rule-based agent present in CARLA as a baseline.
- Why not also supply a video of the resulting agent?
- The
- One additional work: Meta-Reinforcement Learning for Adaptive Autonomous Driving, (Jaafra, Luc, Aline, & Mohamed, 2019) [pdf] [poster].
- This idea is to use multi-task learning to improve generalization capabilities for an AD controller.
- As detailed, "Meta-learning refers to learn-to-learn approaches that aim at training a model on a set of different but linked tasks and subsequently generalize to new cases using few additional examples".
- In other words, the goal is to find an optimal initialization of parameters, to then quickly adapt to a new task through a few standard gradient descent steps (few-shot generalization).
- A gradient-based meta-learner inspired from Model-Agnostic Meta-Learning (
MAML
- Finn et al., 2017) is used. - RL performance in non-stationary environments and generalisation in AD are interesting topics. But no clear benefit is demonstrated, and the above limitations apply also here.
"High-level Decision Making for Safe and Reasonable Autonomous Lane Changing using Reinforcement Learning"
-
[
2018
] [π] [ πUniversity of Freiburg
] [ πBMW
] -
[
Q-Masking
,RSS
]
Click to expand
One figure:
Source. |
Authors: Mirchevska, B., Pek, C., Werling, M., Althoff, M., & Boedecker, J.
- It relates to the
RSS
andQ-masking
principles.- The learning-based algorithm (
DQN
) is combined with arule-based
checker to ensure that only safe actions are chosen at any time. - A
Safe Free Space
is introduced.- For instance, the agent must keep a safe distance from other vehicles so that it can stop without colliding.
- The learning-based algorithm (
- What if the
Safe Free Space
is empty?-
"If the action is considered safe, it is executed; if not, we take the second-best action. If that one is also unsafe, we stay in the current lane."
-
- About the
PELOPS
simulator:- It has been developed between [ π
fka
(ZF
) ] and [ πBMW
]. - In future works (see above), they switch to an open source simulator:
SUMO
.
- It has been developed between [ π
"Safe Reinforcement Learning on Autonomous Vehicles"
Click to expand
The prediction module masks undesired actions at each time step. Source. |
Here a related patent from the authors π π. Source. |
Authors: Isele, D., Nakhaei, A., Fujimura, K.
- Previous works about learning control with
DQN
s at diverse intersections:- "Navigating occluded intersections with autonomous vehicles using deep reinforcement learning" - (Isele et al., 2017)
- "Transferring Autonomous Driving Knowledge on Simulated and Real Intersections" - (Isele et al., 2017)
- How to make model-free RL "safe"? Two options are mentioned (both required expert knowledge):
- 1- Modifying the reward function (requires careful tuning).
- 2- Constraining exploration (e.g.
action masking
,action shielding
).-
"The methods can completely forbid undesirable states and are usually accompanied by formal guarantees".
- In addition, the learning efficiency can be increased (fewer states to explore).
-
- For additional contents on "Safe-RL in the context of autonomous vehicles", one could read this github project by @AliBaheri.
- Here: The
DQN
is augmented with some action-masking mechanism.- More precisely, a prediction model is used:
- The predicted position of the ego car is compared against the predicted position of all other traffic cars (called
forward predictions
). - If an overlap of the regions is detected, the action is marked as unsafe.
- Else, the agent is allowed to freely explore the safe state space, using traditional model-free RL techniques.
- The predicted position of the ego car is compared against the predicted position of all other traffic cars (called
- Note: this illustrates the strong relation between
prediction
andrisk assessment
.
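A minimal sketch of this overlap check between forward predictions; the data layout and the distance threshold are assumptions, not the authors' exact geometry.

```python
def mark_unsafe_actions(ego_forward_predictions, traffic_forward_predictions, min_gap=2.0):
    """For each candidate action, compare the predicted ego positions against the
    predicted positions of all traffic cars; any overlap marks the action as unsafe."""
    safe = {}
    for action, ego_path in ego_forward_predictions.items():     # {action: [(x, y), ...]}
        safe[action] = all(
            (ex - ox) ** 2 + (ey - oy) ** 2 > min_gap ** 2
            for (ex, ey), others_at_t in zip(ego_path, traffic_forward_predictions)
            for (ox, oy) in others_at_t
        )
    return safe   # only actions with safe[action] == True are exposed to the agent
```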
- One challenge: Ensuring "safety" not just for the next step, but for the whole trajectory.
-
"To ensure that the agent never takes an unsafe action, we must check not only that a given action will not cause the agent to transition to an unsafe state in the next time step, but also that the action will not force the agent into an unsafe state at some point in the future."
-
"Note that this is closely related to the credit assignment problem, but the risk must be assigned prior to acting".
- This made me think of tree search techniques, where a path is explored until its terminal node.
- To cope with the exponential complexity, the authors proposed some approximation to restrict the exploration space.
- One of them being the
temporal abstraction
for actions (see this video series for a quick introduction). - The idea of this so-called
option
orintentions
framework, is to distinguish betweenlow-level
andhigh-level
actions-
"This can be thought of as selecting an open-loop high-level decision followed by subsequent bounded closed-loop low-level corrections."
- For a given time horizon, the trajectories described with these options are way shorter, hence reducing the size of the state set that is to be checked.
-
- This leads to the definition of a functional local-state (I did not understand all the details) including some variance term:
-
"The variance acts as a bound that encompasses the variety of low-level actions that produce similar high-level actions. Additionally, we will use the variance to create safety bounds".
-
- One of them being the
-
- One remark: similar to the "collision checker" for path planning, I can imagine that this prediction module becomes the computational bottleneck of the framework.
"Automating Vehicles by Deep Reinforcement Learning Using Task Separation with Hill Climbing"
-
[
2017
] [π] [ πIMT Lucca
] -
[
stochastic policy search
,gradient-free RL
,policy-gradient RL
,reward shaping
]
Click to expand
Source. |
Author: Plessen, M. G.
-
One remark: to be honest, I find this publication not very easy to understand. But it raises important questions. Here are some take-aways.
-
One term:
(TSHC)
= Task Separation with Hill Climbing- Hill Climbing has nothing to do with the gym MountainCar env.
- It rather refers to as a gradient-free optimization method: the parameters are updated based on greedy local search.
- For several reasons, the author claims derivative-free optimization methods are simpler and more appropriate for his problem, compared to policy-gradient RL optimization methods such as
PPO
andDDPG
where tuned-parameters are numerous and sparse rewards are propagating very slowly.
- The idea of Task Separation concerns the main objective of the training phase: "encode many desired motion primitives (training tasks) in a neural network", hoping for generalisation when exposed to new tasks.
- It is said to serve for exploration in optimization: each task leads to a possible region with locally optimal solution, and the best solution among all identified locally optimal solutions is selected.
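A minimal sketch of such greedy, gradient-free policy search; the perturbation scale and the acceptance rule are illustrative, not the paper's exact TSHC procedure.

```python
import numpy as np

def hill_climbing(evaluate_return, theta, n_iters=200, sigma=0.1, seed=0):
    """Gradient-free policy search: perturb the parameter vector and keep the perturbation
    only if the (possibly sparse) return over the training tasks improves."""
    rng = np.random.default_rng(seed)
    best_return = evaluate_return(theta)
    for _ in range(n_iters):
        candidate = theta + sigma * rng.standard_normal(theta.shape)
        candidate_return = evaluate_return(candidate)
        if candidate_return > best_return:          # greedy local search
            theta, best_return = candidate, candidate_return
    return theta, best_return
```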
-
One concept:
sparse reward
- Reward shaping is an important problem when formulating the decision-making problem for autonomous driving using a (PO)MDP.
- The reward signal is the main signal used for the agent to update its policy. But if it only receives positive reward when reaching the goal state (i.e. sparse reward), two issues appear:
- First, it will take random actions until, by chance, it gets some non-zero reward. Depending on how long it takes to get these non-zero rewards, it might take the agent extremely long to learn anything.
- Secondly, because nonzero rewards are seen so rarely, the sequence of actions that resulted in the reward might be very long, and it is not clear which of those actions were really useful in getting the reward. This problem is known as credit assignment in RL. (Explanations are from here).
- Two options are considered in this work:
- "Rich reward signals", where a feedback is provided at every time step (
r
becomes function oft
). - "Curriculum learning", where the learning agent is first provided with simpler examples before gradually increasing complexity.
- "Rich reward signals", where a feedback is provided at every time step (
- After trials, the author claims that no consistent improvement could be observed with these two techniques, adding that the design of both rich reward signals and "simple examples" for curriculum learning are problematic.
- He rather kept working with sparse rewards (maximal sparse rewards), but introduced some "virtual velocity constraints" to speed up the training.
-
I like the points he made concerning feature selection for the state, i.e. how to design the state
s(t)
of the MDP.- He notes that
s(t)
must always relate the current vehicle state with reference to a goal state.- In other words, one should use relative features for the description of the position and velocity, relative to their targets.
- In addition,
s(t)
should also consider the past and embed a collection of multiple past time measurements.- It seems sounds. But this would indicate that the "Markov property" in the MDP formulation does not hold.
- He notes that
"Automatic learning of cyclistβs compliance for speed advice at intersections - a reinforcement learning-based approach"
-
[
2019
] [π] [ πDelft University
] -
[
Dyna-2
,Dyna-Q
]
Click to expand
The proposed algorithm learns the cyclist's behaviour in reaction to the advised speed . It is used to make predictions about the next state, allowing for a search that helps to plan the best next move of the cyclist on-the-fly. A look-up table is used to model F . Source. |
Authors: Dabiri, A., Hegyi, A., & Hoogendoorn, S.
- Motivation:
1-
Advise a cyclist what speed to adopt when approaching traffic lights with uncertainty in the timing.- To me, it looks like the opposite of numerous works that control traffic lights, assuming behaviours of vehicles, in order to optimize the traffic flow. Here, it may be worth for cyclists to speed up to catch a green light and avoid stopping.
- Note that this is not a global optimization for a group of cyclists (e.g. on crossing lanes). Only one single cyclist is considered.
- Note that the so-called "
agent
" is not the cyclist, but rather the module that provides the cyclist a speed advice.
2-
Do not assume full compliance of the cyclist to the given advice, i.e. take into account the effect of disregarding the advice.
- Challenges:
1-
There is no advanced knowledge on how the cyclist may react to the advice he/she receives.- The other dynamics (or transition) models (deterministic kinematics of the bike and stochastic evolution of the traffic light state) are assumed to be known.
2-
The computation time available at each decision step is limited: we cannot afford to wait fornext-state
to be known before starting to "search".
- Main ideas:
- Learn a model of the reaction of cyclist to the advice (using a
look-up table
), on real-time (it seemscontinuous learning
to me). - Use a second search procedure to obtain a local approximation of the action-value function, i.e. to help the agent to select its next action.
- Hence:
-
"Combine learning and planning to decide of the
speed
of a cyclist at an intersection".
-
- Learn a model of the reaction of cyclist to the advice (using a
- One strong inspiration:
Dyna-2
(Silver & Sutton, 2007).-
"The value function is a linear combination of the transient and permanent memories, such that the transient memory tracks a local correction to the permanent memory".
- Without transient memory, it reduces to
linear Sarsa
. - Without permanent memory, it reduces to a sample-based search algorithm.
-
- One idea: use
2
search procedures:-
"Similar to
Dyna-2
,Dyna-c
[c
forcyclist
], learns from the past and the future:" 1-
Search I
: The long-term action-value is updated from what has happened in real world.Q
(s
,a
), which is updated from real experience.- This long-term memory is used to represent general knowledge about the domain.
Search I
can benefit from a local approximation provided bySearch II
. How? is I a real search or just argmax()?
2-
Search II
: The short-term action-value is updated from what could happen in the future.QΒ―
(s
,a
), which uses simulated experience for its update and focuses on generating a local approximation of the action-value function.- Based on the learnt model and the selected action, the agent predicts the state in the next time step.
- It can simulate experiences (search procedure) that start from this "imagined" state and update
QΒ―
accordingly.
-
- Difference with
dyna-q
. Time constrain: we can neither afford to wait for the next observation nor to take too long to think after observing it (as opposed to e.g. GO).Search II
has exactly one timestep to perform its searches:-
"Just after the action is taken and before reaching to the next time step, the agent has
Ts
=βt
seconds to performSearch II
."
-
- One take-away:
-
"Proper initialisation of
Q
can significantly improve the performance of the algorithm [I note the logically equivalent contrapositive]; the closer the algorithm starts to the real optimal action-value, the better." -
"Here,
Q
is initialised with its optimal value in case of full compliance of the cyclist [next-observed speed
=
advised speed
]. Stochastic Dynamic Programming (SDP
) is used for such initialisation."
-
"ReQueST: Learning Human Objectives by Evaluating Hypothetical Behavior"
-
[
2019
] [π] [ποΈ] [] [ π
UC Berkeley
] [ πDeepMind
] -
[
safe exploration
,reward learning
]
Click to expand
Right: Procedure to learn the hidden reward function: Using an offline-learnt generative model , query trajectories are produced for each acquisition function (AF ). Transitions of these trajectories are labelled by the user. The reward model ensemble is retrained on the updated training data using maximum-likelihood estimation. Source. |
Four acquisition functions: Maximize predicted rewards makes the car drive fast and far. Maximize reward model uncertainty makes the car drive close to the border. Minimize predicted rewards makes the car drives off-road. Maximize the novelty of training data makes the car stay still (since most training examples show cars in motion). Animated figure here. |
Authors: Reddy, S., Dragan, A. D., Levine, S., Legg, S., & Leike, J.
- One quote:
-
"We align agent behavior with a userβs objectives by learning a model of the userβs reward function and training the agent via (
model-based
)RL
."
-
- One term: "reward query synthesis via trajectory optimization" (
ReQueST
)synthesis
:- The model first learns a generative model, i.e. a transition or forward dynamics function.
- It is trained using off-policy data and maximum-likelihood estimation, i.e. unsupervised learning.
- It is used to produce synthetic trajectories (instead of using the default training environment).
- Note: building a forward dynamics model for cars in interactive environments looks very challenging.
reward query
:- The user labels each transition in the synthetic trajectories based on some reward function (unknown to the agent).
- Based on these signals, the agent learns a reward model
r
(s
,a
,s'
), i.e. supervised learning on the user-provided labels.
good
- the car drives onto a new patch of road.unsafe
- off-road.neutral
- in a previously-visited road patch.
-
"We use an ensemble method to model uncertainty."
trajectory optimization
:- Once the reward model has converged, a model-based
RL
agent that optimizes the learned rewards is deployed. - It combines planning with model-predictive control (
MPC
).
- Once the reward model has converged, a model-based
- One concept: "acquisition function" (
AF
).- It answers the question: how to generate "useful" query trajectories?
- One option is to sample random trajectories from the learnt generative model.
-
"The user knows the rewards and unsafe states, but querying the user is expensive." So it has to be done efficiently.
- To generate useful queries, trajectories are synthesized so as to maximize so-called "acquisition functions" (
AF
).
- The authors explain (I did not understand everything) that these
FA
serve (but not all) as proxy for the "value of information" (VOI
):-
"The
AF
evaluates how useful it would be to elicit reward labels for trajectoryΟ
".
-
- The maximization of each of the
4
FA
is intended to produce different types of hypothetical behaviours, and get more diverse training data and a more accurate reward model:1-
Maximize reward model uncertainty.- It is based on
ensemble disagreement
, i.e. generation of trajectories that maximize the disagreement between ensemble members. - The car is found to drive to the edge of the road and slowing down.
- It is based on
2-
Maximize predicted rewards.- The agent tries to act optimally when trying to maximize this term.
- It should detect when the reward model incorrectly outputs high rewards (
reward hacking
).
3-
Minimizes predicted rewards.-
"Reward-minimizing queries elicit labels for unsafe states, which are rare in the training environment unless you explicitly seek them out."
- The car is going off-road as quickly as possible.
-
4-
Maximize the novelty of training data.- It produces novel trajectories that differ from those already in the training data, regardless of their predicted reward.
-
"The car is staying still, which makes sense since the training data tends to contain mostly trajectories of the car in motion."
- More precisely, the trajectory generation targets two objectives (balanced with some regularization constant):
1-
Produce informative queries, i.e. maximize theAFs
.2-
Produce realistic queries, i.e. maximize the probability of the generative model (staying on the distribution of states in the training environment).
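As an illustration of the first acquisition function, here is a minimal sketch of ensemble disagreement over a candidate query trajectory (the interfaces of the reward models are assumptions):

```python
import numpy as np

def ensemble_disagreement(reward_models, trajectory):
    """Acquisition value of a candidate query trajectory: the disagreement (variance)
    between the reward predictions of the ensemble members, summed over transitions."""
    # predictions: (n_models, n_transitions)
    predictions = np.array([[m(s, a, s_next) for (s, a, s_next) in trajectory]
                            for m in reward_models])
    return predictions.var(axis=0).sum()
```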
- About safe exploration.
- Via
AF-3
, the reward model learns to detect unsafe states. -
"One of the benefits of our method is that, since it learns from synthetic trajectories instead of real trajectories, it only has to imagine visiting unsafe states, instead of actually visiting them."
- In addition (to decide when the model has learnt enough), the user observes query trajectories, which reveals what the reward model has learned.
- Via
"Semantic predictive control for explainable and efficient policy learning"
-
[
2019
] [π] [ποΈ] [] [ π
UC Berkeley (DeepDrive Center), Shanghai Jiao Tong University, Nanjing University
] -
[
MPC
,interpretability
,CARLA
]
Click to expand
SPC , inspired from MPC , is composed of one semantic feature extractor, one semantic and event predictor, and one guide for action selection. Source. |
Authors: Pan, X., Chen, X., Cai, Q., Canny, J., & Yu, F.
- Motivations:
1-
Sample efficiency.2-
Interpretability.- Limitation of
behavioural cloning
methods:-
"Direct imitative behaviors do not consider future consequences of actions explicitly. [...] These models are reactive and the methods do not incorporate reinforcement or prediction signals."
-
- Limitations of
model-free RL
methods:-
"To train a reliable policy, an
RL
agent requires orders of magnitude more training data than a human does for the same task." -
"An unexplainable
RL
policy is undesirable as a single bad decision can lead to a severe consequence without forewarning."
-
- One term: "Semantic Predictive Control" (
SPC
).- It is inspired by Model Predictive Control (
MPC
) in that it seeks an optimal action sequence over a finite horizon and only executes the first action. - "Semantic" because the idea it to try to predict future semantic maps, conditionned on action sequences and current observation.
SPN
is trained on rollout data sampled online in the environment.
- It is inspired by Model Predictive Control (
- Structure:
1-
Semantic estimation.- Multi-scale intermediate features are extracted from
RGB
observations, using "Deep Layer Aggregation" (DLA
), a special type of skip connections. - As noted:
-
"Using semantic segmentation as a latent state representation helps to improve data efficiency."
-
- This multi-scale feature representation is passed together with the planned action into the prediction module to iteratively produce future feature maps.
- Multi-scale intermediate features are extracted from
2-
Representation prediction.- What is predicted?
2.1-
The future scene segmentation2.2-
Some task-dependent variables (seen as "future events") conditioned on current observation and action sequence. This can include:Collision
signal (binary).Off-road
signal (binary).Single-step travel distance
(scalar).Speed
(scalar).Driving angle
(scalar).- Note: in their POC with flappy bird, authors also predicted the discounted sum of rewards.
- What is predicted?
3-
Action sampling guidance.- How to select actions?
3.1-
One possible solution is to perform gradient descent to optimize an action sequence.3.2-
Another solution is to perform a grid search on the action space, and select the one with the smallest cost.3.3-
Instead, the authors propose to use the result of theSMP
:-
"
SPN
outputs an action guidance distribution given a state input, indicating a coarse action probability distribution". - Then, they sample multiple action sequences according to this action guidance distribution, then evaluates their costs, and finally pick the best one.
-
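A minimal sketch of this guided, sampling-based selection (the candidate count, horizon, and the callables for sampling and cost evaluation are placeholders):

```python
import numpy as np

def select_action_sequence(sample_from_guidance, rollout_cost, n_candidates=32, horizon=5):
    """Sample several action sequences from the coarse guidance distribution,
    evaluate each with the learnt predictor, and keep the cheapest one
    (MPC-style: only its first action would be executed)."""
    candidates = [[sample_from_guidance() for _ in range(horizon)] for _ in range(n_candidates)]
    costs = [rollout_cost(seq) for seq in candidates]
    return candidates[int(np.argmin(costs))]
```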
"Vision-Based Autonomous Drivingβ―: A Model Learning Approach"
-
[
2019
] [π] [ πUniversity of Michigan
] [ πFord
] -
[
VAE
,stochastic policy search
,CARLA
]
Click to expand
One figure:
The perception module, the memory or prediction module, and the control module. Source. |
Authors: Baheri, A., Kolmanovsky, I., Girard, A., Tseng, E., & Filev, D.
- The idea is to first learn a model of the environment (the
transition function
of theMDP
) and subsequently derive a policy based on it. - Three modules are used:
- 1- A
VAE
is trained to encode front camera views into an abstract latent representation. - 2- A
LSTM
is trained to predict the latent representation of the one time-step ahead frame, given the action taken and the current state representation. Based on this prediction (mean
andstd
), a next state representation is sampled using the VAE. - 3- A
CMA-ES
is trained to take actions (steering
,acceleration
, andbrake
) based on theLSTM
hidden state (capturing history information) and the current state representation (predicted). The problem is formulated as anMDP
.
- 1- A
- One idea about the continuous action space:
-
"We combine the acceleration and brake commands into a single value between
-1
to+1
, where the values betweenβ1
and0
correspond to the brake command and the values between0
and1
correspond to the acceleration command". - The authors use the term "acceleration command" for one of the actions. CARLA works with
throttle
, as human use the gas-pedal. - I have realized that the mapping
acceleration
->
throttle
is very complex. Therefore I think the agent is learning thethrottle
and considering the single NN layer used for the controller, this may be quite challenging.
-
- About the
CMA-ES
:ES
means "Evolution Strategy", i.e. an optimization technique based on ideas of evolution, iterating between ofvariation
(viarecombination
andmutation
) andselection
.ES
is easy to implement, easy to scale, very fast if parallelized and extremely simple.
CMA
means "Covariance Matrix Adaptation".- This means that in the
variation
phase, not only themean
but also thecovariance matrix
of the population is updated to increase the probability of previously successful steps. - Therefore, it can be seen as Cross-Entropy Methods (
CEM
) with momentum.
- This means that in the
- About sampling efficiency:
- The authors note that
IL
andmodel-free RL
baselines were taking resp.14
hours and12
days of driving for training and were both outperformed by the presentedmodel-based RL
approach which required5
hours of human driving.- This only considers the time to interact with the environment, i.e. to record images.
- It would be interesting to consider the time needed to learn the policy afterward.
CMA-ES
, as a derivative-free method, is one of the least sample-efficient approaches.- I find it interesting that an evolutionary algorithm was chosen given the motivation of increasing sampling efficiency.
- The authors note that
- About
model-based
RL:- The performance really depends on the ability to learn a reliable model of the environment.
- The low-level representation of the
VAE
(size128
) may not capture the most difficult situations. - The authors suggest looking at mid-level representations such as the affordance representation of DeepDriving instead.
- The low-level representation of the
- Here, the authors strictly split the two tasks: First learn a model. Then do planning.
- Why not keep interacting from time to time with the
env
, in order to vary the sources of experience?- This should still be more sample efficient than model-free approaches while making sure the agent keep seeing "correct" transitions.
- The performance really depends on the ability to learn a reliable model of the environment.
"Visionβbased control in the open racing car simulator with deep and reinforcement learning"
Click to expand
Some figures:
First extract some variables - e.g. curvature , desired speed , lateral offset , offset in heading - from images using supervised learning and then apply control learnt with model-based RL . Source. |
The model-based PILCO algorithm is used to quickly learn to predict the desired speed. Source. |
Authors: Zhu, Y., & Zhao, D.
- Definitions:
- State variables:
x
= [lateral deviation
,angle deviation
,desired speed
]. - Dynamical variables:
y
= [x
,curvature
]. - Cost variables
z
= [y
,current speed
]. - Control variables:
u
= [steering
,throttle or brake
]. - The variable
current speed
is always known: either given byTORCS
or read fromCAN bus
.
- State variables:
- One idea: contrary to `E2E`, the authors want to separate `perception` and `control`. Hence the training is divided into two steps:
    - `1-` Extract the dynamical variables `y` from the simulator (assuming full observation) and learn a driving controller. -> Using model-based RL.
    - `2-` Try to extract `y` from images. -> Using supervised learning.
    - This step-by-step method brings advantages such as the possibility for intermediate checks and uncertainty propagation.
    - But both learning processes are isolated, and one defective block can cause the whole chain to fail.
        - In particular, the authors note that the `CNN` fails at predicting a `0` lateral offset, i.e. when the car is close to the centre, causing the full system to "vibrate".
        - This could be addressed on the controller side (damping factor or adding action consistency in the cost function), but it would be better to back-propagate these errors directly to the perception, as in pixel-to-control approaches.
- What is learnt by the controller?
    - One option would be to learn the transition function leading to the new state: `x[t+1]` = `f`(`y`, `u`, `x`). This is what the simulator applies internally.
    - Instead, here, the distribution of the change in state is learnt: `delta`(`x`) = `x[t+1]` - `x[t]` = `f`(`y`, `u`, `x`).
    - Data is collected through interactions and used to optimize the parameters of the controller (a small sketch follows this list):
        - Training inputs are formed by some recorded `Y` = [`y`, `u`].
        - Training targets are built with some recorded `ΔX` = [`delta`(`x`)].
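A tiny sketch (assumed names and array shapes, not the authors' code) of how these training pairs could be assembled from one recorded rollout:

```python
import numpy as np

def build_training_set(x_log, y_log, u_log):
    """x_log: (T+1, 3) states, y_log: (T, 4) dynamical variables, u_log: (T, 2) controls."""
    Y = np.hstack([y_log, u_log])     # training inputs  Y_t  = [y_t, u_t]
    dX = np.diff(x_log, axis=0)       # training targets Δx_t = x_{t+1} - x_t
    return Y, dX
```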
- Another idea: the car is expected to run at different velocities.
    - Hence the desired speed is varied depending on the curvature, the current velocity and the deviation in `heading`.
    - This is what the agent must learn to predict.
    - In the reward function of `PILCO`, the term about the desired velocity plays the largest role (if you do not learn to decelerate before a turn, your experiences will always be limited since you will get off-road at each sharp turn).
- One algorithm (a minimal sketch is given after this list): `PILCO` = `P`robabilistic `I`nference for `L`earning `CO`ntrol.
    - In short, this is a model-based `RL` algorithm where the system dynamics are modelled using a Gaussian process (`GP`).
    - The `GP` predicts the outcome distribution of `delta`(`x`) with probabilities. Hence the first letter `P`.
        - In particular, the job is to predict the mean and the standard deviation of this distribution, which is assumed to be Gaussian.
        - This probabilistic nature is important since model-based `RL` usually suffers from model bias.
    - The cost variables are also predicted, and based on this `z` distribution, the optimal control `u` is derived using policy gradient search (`PGS`).
        - More precisely, the control variable `u` is assumed to be a function of the expected cost `z` via an affine transformation followed by some saturation: `u` = `sat`(`w`*`z` + `b`).
        - Hence `PGS` aims at finding {`w`, `b`}: the predicted return and its derivatives are used to optimize the controller parameters.
        - The new controller is again used in `TORCS` to generate data, and the learning process is repeated.
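A minimal, hedged sketch of these two ingredients, i.e. the `GP` model of `delta`(`x`) and the affine + saturation control law. It uses `scikit-learn` as a stand-in for the paper's GP machinery and `tanh` as a stand-in for `sat`; it is only an illustration, not the authors' implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_delta_model(Y, dX):
    """One GP per state dimension, predicting delta(x) from the inputs [y, u]."""
    kernel = RBF(length_scale=np.ones(Y.shape[1])) + WhiteKernel()
    return [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(Y, dX[:, i])
            for i in range(dX.shape[1])]

def predict_delta(gps, y_u):
    """Mean and standard deviation of the (assumed Gaussian) change in state."""
    stats = [gp.predict(np.asarray(y_u).reshape(1, -1), return_std=True) for gp in gps]
    return (np.array([m[0] for m, _ in stats]), np.array([s[0] for _, s in stats]))

def control_law(z, w, b):
    """u = sat(w*z + b); tanh is used here as a smooth stand-in for the saturation."""
    return np.tanh(w @ z + b)
```

The parameters {`w`, `b`} would then be optimized against the return predicted by rolling this learnt model forward, which is what the policy gradient search does with analytic derivatives in the actual `PILCO` formulation.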
- Why and how is the vanilla `PILCO` modified?
    - The computational complexity of `PILCO` is linear in the size of the training set.
        - Instead of using a sparse GP method (e.g. `FITC`), the authors decide to prune the dataset.
        - In particular, observation data are sparsely collected and renewed at each iteration.
    - Other modifications relate to the differences between input/output variable types, and to the use of different scenarios to calculate the expected returns.
- One quote about the difference between `PILCO` and `MPC`:
    - "The concept of PILCO is quite similar to explicit MPC algorithms, but MPC controllers are usually defined piecewise affine. For PILCO, control law can be represented in any differentiable form."
"Efficient Uncertainty-aware Decision-making for Autonomous Vehicles Using Guided Branching"
-
[
2020
] [π] [] [ π
Hong Kong University
] -
[
guided branching
]
Click to expand
The Domain-specific Closed-loop Policy Tree (DCP-Tree ) provides a guided branching mechanism in the action space. For efficiency, this semantic-level policy tree is updated based on the previous best policy. Each node in the policy tree is a finite-horizon semantic behaviour of the ego car. From the ongoing action, each policy sequence will contain at most one change of action in one planning cycle. Compared to MPDM , this offers the possibility to change decision in the planning horizon while, as humans, not frequently changing the driving policy back and forth in a single decision cycle. Source. |
Decision-making using domain-specific expert knowledge to guide branching, inspired by MPDM . Branching in the action domain is guided by the DCP -Tree. Branching in the intention domain is done by the CFB mechanism to pick out risky hidden intentions of nearby vehicles. Source. |
Authors: Zhang, L., Ding, W., Chen, J., & Shen, S.
- Motivations: Propose a simpler alternative to existing online `POMDP` solvers (`DESPOT`, `ABT`, `POMCP`), with a focus on efficiency.
    - "The key idea is utilizing domain-specific expert knowledge to guide the branching in both `(1) action` and `(2) intention` space."
    - The goal is to consider as few branches as possible: the most critical ones, potentially leading to risky outcomes.
- Two related works:
    - "Multipolicy Decision-Making for Autonomous Driving via Changepoint-based Behavior Prediction"
    - "MPDM: Multipolicy decision-making in dynamic, uncertain environments for autonomous driving"
    - `MPDM` is extended here and used as a benchmark.
        - "`MPDM` approximates the `POMDP` process into the `(1)` closed-loop simulation of predefined `(2)` semantic-level driving policies (e.g., `LC`, `LK`, etc.). The incorporation of domain knowledge greatly accelerates the problem-solving."
        - `(1)` The exploration of the state space is guided by simple closed-loop controllers, i.e. domain knowledge.
        - `(2)` Semantic-level policies are used instead of traditional "`state`"-level actions such as discretized accelerations or velocities.
        - "One major limitation of `MPDM` is that the semantic-level policy of the ego vehicle is not allowed to change in the planning horizon."
- Differences with the original `MPDM`:
    - `1-` The policy of the ego vehicle is allowed to change within the planning horizon, according to the `DCP-Tree` (a small sketch of this branching rule is given after this list).
    - `2-` Focused branching is applied to pick out the risky scenarios, even given totally uncertain behaviour prediction, which enhances the safety of the framework.
        - "The `CFB` mechanism is applied to pick out risky hidden intentions of nearby vehicles and achieves guided branching in intention space."
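A hypothetical sketch of the `DCP-Tree` branching rule ("at most one change of the semantic action per planning cycle"), as I understand it; the action names, the horizon and the function name are illustrative assumptions:

```python
SEMANTIC_ACTIONS = ["LK", "LCL", "LCR"]   # lane-keep, lane-change left / right

def dcp_tree_sequences(ongoing: str, depth: int = 4):
    """Enumerate policy sequences that start from the ongoing action and
    switch to a different action at most once within the horizon."""
    sequences = [[ongoing] * depth]                      # never change
    for t in range(1, depth):                            # change at step t ...
        for new_action in SEMANTIC_ACTIONS:
            if new_action != ongoing:                    # ... to a different action
                sequences.append([ongoing] * t + [new_action] * (depth - t))
    return sequences

# 1 + 3*2 = 7 guided sequences instead of 3^4 = 81 unguided combinations:
for seq in dcp_tree_sequences("LK"):
    print(seq)
```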
- About the formulation:
    - [hidden part of the state] The intention about lateral behaviours, in {`LK`, `LCL`, `LCR`}.
    - [belief tracker] A "rule-based lightweight belief tracking module" generates a probability distribution over these intentions. Not many details are given about this module.
    - [transition model] Two driver models: the `intelligent driver model` (`IDM`) and a `pure pursuit controller` (a short `IDM` sketch is given after this list).
    - [action] Longitudinal {} and lateral semantic-level decisions.
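For reference, a minimal sketch of the classic `IDM` acceleration rule in its standard textbook form; the parameter values are common illustrative defaults, not the paper's:

```python
import math

def idm_acceleration(v, v_lead, gap,
                     v0=10.0,    # desired speed [m/s]
                     T=1.5,      # desired time headway [s]
                     a_max=1.5,  # maximum acceleration [m/s^2]
                     b=2.0,      # comfortable deceleration [m/s^2]
                     s0=2.0):    # minimum gap [m]
    """Longitudinal acceleration of a follower at speed v, behind a leader at v_lead with the given gap."""
    delta_v = v - v_lead
    s_star = s0 + max(0.0, v * T + v * delta_v / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)
```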
- Some terms:
    - `MPDM` = multipolicy decision-making.
    - `EUDM` = efficient uncertainty-aware decision-making.
    - `DCP-Tree` = domain-specific closed-loop policy tree.
    - `CFB` = conditional focused branching.
"Autonomous Driving at Intersections: A Critical-Turning-Point Approach for Left Turns"
-
[
2020
] [π] [ πUniversity of Waterloo
,Tsinghua University
] [ πTencent
] -
[
intention-aware motion planning
,ABT
]
Click to expand
Source. |
The ego vehicle updates its beliefs on the route -intention of the oncoming vehicle. At start, actions share the same pattern. But when the left-turn intention becomes highly likely, an acceleration action is performed (up) while a braking action is preferred in the straight -intention case since the future path is blocked by the oncoming traffic. Source. |
Authors: Shu, K., Yu, H., Chen, X., Chen, L., Wang, Q., Li, L., & Cao, D.
- Motivations:
    - Replicate human-like efficient behaviour for a left-turn at an unsignalized intersection, trading off between safety and conservatism, without explicit hand-written rules:
        - "Before merging into the intersection, the ego vehicle drives into the intersection with high speed. Then it decelerates to a lower speed and creeps forward, considering the potential of collision with the oncoming vehicle while waiting for the oncoming vehicle's intention to become certain. The ego vehicle then performs more confident actions on a proper route when the intention of the oncoming vehicles becomes clear."
- Architecture:
    - `1-` [high-level] Path generation using `CTP`s.
    - `2-` [low-level] Path selection and speed planning using a `POMDP`.
- One term: "critical turning point" (`CTP`).
    - "The starting points where the vehicle makes hard steering (sharp turning) are identified and extracted as 'turn points'."
    - They are computed based on the critical zone extraction (`CZE`), which is generated from the road geometry.
    - Path candidates are then generated from the `CTP` and sent to the lower-level planner for path selection and speed planning.
    - [Benefits of `CTP`s]
        - "Our candidate paths planned with `CTP` gives the ego vehicle an option to keep moving forward to reach the next `CTP` when one of the paths is blocked by the oncoming traffic."
        - "The proposed method spends about `1.5s` less time to pass through the intersection than the one that does not use `CTP`s."
        - "When the oncoming vehicle is driving with a higher speed, the shortest and the most aggressive path is chosen since the waiting time is shorter."
- About the `POMDP` formulation:
    - [hidden part of state] Oncoming vehicles have unknown intentions: either `straight` or `left-turn`.
        - The other vehicle is assumed to ignore the ego car; the uncertainty is only about the chosen route.
        - Other approaches include the yield reaction of the others, e.g. {`stopping`, `hesitation`, `normal`, `aggressive`}.
    - [observation] Speeds and positions. I think an `orientation` would also be useful to infer which route is taken, and to give a sense to the `speed` scalar.
    - [action] It is `2`-dimensional (a tiny enumeration of this action space is sketched after this list):
        - `1-` (speed planning) The `ego-acceleration` along the current path, in [`-4 m/s2`, `4 m/s2`] with a step of `1 m/s2`.
        - `2-` (path selection) A `left-turn` Boolean variable which "conveys sharp turn instructions". I understand it as a change in the selected path.
    - [transition model] All vehicles are assumed to move at constant speed on predefined routes.
    - [solver] Adaptive Belief Tree (`ABT`).
        - "Instead of trimming the entire policy tree after an action is selected, the `ABT` algorithm only modifies parts of the tree that are influenced by the updated belief after executing the selected action".
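A tiny enumeration of that action space (my reading of the formulation, not the authors' code):

```python
from itertools import product

accelerations = range(-4, 5)       # -4 ... 4 m/s^2, with a 1 m/s^2 step
left_turn_flags = (False, True)    # keep the current path vs. commit to the sharp left turn

ACTIONS = list(product(accelerations, left_turn_flags))
print(len(ACTIONS))                # 9 * 2 = 18 candidate actions per decision step
```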
"Integrating Planning and Interpretable Goal Recognition for Autonomous Driving"
-
[
2020
] [π] [ πUniversity of Edinburgh
] [ πFiveAI
] -
[
goal recognition
,intention-aware motion planning
,inverse planning
,MCTS
]
Click to expand
Example of scenario (note: left-hand drive) where prediction based on goal-recognition can inform the planning. It enables a less conservative behaviour (entering the intersection earlier) while offering interpretability. Source. |
The main ideas are to couple prediction and planning , try to infer the goals followed by the other vehicles and use high-level abstraction of manoeuvres via macro actions . Source. |
The ego-agent updates its belief on the goals followed by the other vehicles (left). As noted below, the ablation study (right) raises questions about what really offers benefits for the time-efficiency of the driving policy. Source. |
Authors: Albrecht, S. V., Brewitt, C., Wilhelm, J., Eiras, F., Dobre, M., & Ramamoorthy, S.
- Motivations:
    - `1-` Improve the anticipation ability, and hence the efficiency of the driving behaviour, at urban intersections.
    - `2-` Provide intuitive interpretations of the predictions to justify the ego-agent's decisions.
    - `3-` Keep the computational cost low.
- The main ingredients to achieve that are:
    - Couple `planning` and `prediction`.
    - Infer the goals (here specifying target locations) of the other vehicles.
    - Use high-level manoeuvres and macro actions.
- Two assumptions:
    - `1-` Each vehicle seeks to reach some (unknown) goal location from a set of possible goals, and behaves rationally by driving optimally to achieve its goals.
        - Note: for that second point, the definition of the likelihood `p`(`trajectory` | `goal-i`) still allows for a degree of deviation.
    - `2-` At any time, each vehicle is executing one manoeuvre among a finite set:
        - `lane-follow`
        - `lane-change-left`/`right`
        - `turn-left`/`right`
        - `give-way`
        - `stop`
- About goal recognition:
    - The idea is to recognise the goals of the other vehicles, in order to perform rational inverse planning, i.e. to make better-informed decisions (a hedged sketch of such a Bayesian goal update is given after this list).
-
"We must reason about why β that is, to what end β the vehicle performed its current and past manoeuvres, which will yield clues as to its intended goal."
-
"[Limitation of the optimality assumption] An important future direction is to account for human irrational biases."
- How to plan ego-action? By leveraging recognition and prediction.
    - Both the `goal probabilities` and the `trajectory predictions` are used to inform a Monte Carlo Tree Search (`MCTS`) algorithm.
- How to reduce the search depth? Using `macro actions`.
    - "To keep the required search depth shallow and hence efficient, both `inverse planning` and `MCTS` plan over macro actions."
    - In hierarchical `RL`, `macro actions` are sometimes defined by a tuple <`applicability condition`, `termination condition`, `primitive policy`>.
    - Here (I am a little bit confused):
        - Each manoeuvre also specifies applicability conditions (`lane-change-left` is only applicable if there is a lane in the same driving direction on the left of the vehicle) and termination conditions.
        - "Macro actions concatenate one or more manoeuvres, [and] automatically set the parameters in manoeuvres [e.g. `driving distance` for `lane-follow` or `lane-id`s in `give-way`] based on context information (usually road layout)".
        - But no underlying primitive policy is used:
            - "`Macro actions` as used in this work do not define a hierarchy of decomposable actions; they simply define sequences of actions."
    - By reasoning on a high level of abstraction, `macro actions` offer a temporal abstraction that relieves the planner and ensures a low computational cost (an illustrative sketch follows).
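An illustrative sketch (assumptions, not the paper's code) of a macro action that concatenates manoeuvres and fills in their parameters from road-layout context; the context keys and the `exit_junction` example are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Manoeuvre:
    name: str
    applicable: Callable[[Dict], bool]   # e.g. "is there a lane to the left?"
    terminated: Callable[[Dict], bool]   # e.g. "has the lane change completed?"
    params: Dict = field(default_factory=dict)

@dataclass
class MacroAction:
    """A fixed sequence of manoeuvres; no primitive-policy hierarchy underneath."""
    manoeuvres: List[Manoeuvre]

    def applicable(self, context: Dict) -> bool:
        # A macro action is only offered if its first manoeuvre is applicable.
        return self.manoeuvres[0].applicable(context)

def exit_junction(context: Dict) -> MacroAction:
    """Hypothetical example: give way at the junction, then follow the target lane."""
    give_way = Manoeuvre("give-way",
                         applicable=lambda c: c["approaching_junction"],
                         terminated=lambda c: c["junction_clear"],
                         params={"oncoming_lane_ids": context["oncoming_lane_ids"]})
    follow = Manoeuvre("lane-follow",
                       applicable=lambda c: True,
                       terminated=lambda c: c["distance_driven"] >= context["exit_distance"],
                       params={"driving_distance": context["exit_distance"]})
    return MacroAction([give_way, follow])
```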
-