This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.
Name | Udacity Mail Account |
---|---|
Heiko Schmidt | [email protected] |
In this project, a ROS environment is used to control a car driving on a multi-lane road. The goal is to drive one full lap while following the given waypoints, detecting traffic light states, and stopping at the stop line of any red traffic light.
The waypoint updater node's task is to publish a fixed number of waypoints ahead of the car's current position. The waypoints are provided by Udacity at specific points in space, each with a target velocity attached. This target velocity must not exceed the speed limit and needs to be adjusted to bring the car to a stop at a red traffic light's stop line. The movement is controlled by the drive-by-wire node.
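The lookahead logic of the waypoint updater can be sketched roughly as follows. This is a simplified sketch with waypoints as plain `(x, y)` tuples; the constant `LOOKAHEAD_WPS` follows the project skeleton, while the helper names are illustrative assumptions.

```python
import math

LOOKAHEAD_WPS = 200  # number of waypoints published ahead of the car

def closest_waypoint_ahead(waypoints, car_x, car_y, car_yaw):
    """Index of the nearest waypoint that lies in front of the car."""
    best_i = min(range(len(waypoints)),
                 key=lambda i: (waypoints[i][0] - car_x) ** 2 +
                               (waypoints[i][1] - car_y) ** 2)
    wx, wy = waypoints[best_i]
    # If the closest waypoint is behind the car, advance to the next one.
    heading = math.atan2(wy - car_y, wx - car_x)
    diff = abs(math.atan2(math.sin(heading - car_yaw),
                          math.cos(heading - car_yaw)))
    if diff > math.pi / 2:
        best_i = (best_i + 1) % len(waypoints)
    return best_i

def lookahead(waypoints, car_x, car_y, car_yaw):
    """Slice of LOOKAHEAD_WPS waypoints starting at the closest one ahead."""
    i = closest_waypoint_ahead(waypoints, car_x, car_y, car_yaw)
    return [waypoints[(i + k) % len(waypoints)] for k in range(LOOKAHEAD_WPS)]
```

The modulo wrap lets the lookahead continue past the end of the track, which matches driving a full lap on a closed loop.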
The DBW node represents the car's controller. Its task is to control the throttle, brake, and steering, publishing all of them as ROS twist commands to the car. It uses the twist controller module.
The twist controller is used in the above-mentioned DBW node to control acceleration and steering. The module uses a PID controller for throttle and steering and a low-pass filter to smooth the velocity signal.
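A minimal sketch of the two controller building blocks mentioned above; the gains, limits, and method names are illustrative, not the exact ones used in the project.

```python
class PID:
    """Simple PID controller with output clamping, e.g. for throttle."""
    def __init__(self, kp, ki, kd, mn=0.0, mx=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.mn, self.mx = mn, mx
        self.int_val = 0.0     # accumulated integral term
        self.last_error = 0.0  # previous error for the derivative term

    def step(self, error, dt):
        self.int_val += error * dt
        derivative = (error - self.last_error) / dt
        self.last_error = error
        val = self.kp * error + self.ki * self.int_val + self.kd * derivative
        return max(self.mn, min(self.mx, val))


class LowPassFilter:
    """First-order low-pass filter to smooth a noisy velocity signal."""
    def __init__(self, tau, ts):
        self.a = ts / (ts + tau)  # smoothing factor from time constant
        self.last_val = 0.0
        self.ready = False

    def filt(self, val):
        if self.ready:
            val = self.a * val + (1.0 - self.a) * self.last_val
        else:
            self.ready = True  # pass the first sample through unchanged
        self.last_val = val
        return val
```

In a typical DBW loop, the filtered current velocity is compared with the target velocity, and the resulting error feeds the PID step each control cycle.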
The traffic light detection and classification node identifies the nearest traffic light, checks its state from the given camera images, and coordinates where and when to stop and when to start driving again.
For this task I used the TensorFlow Object Detection API. For training, I used a pre-trained Single Shot MultiBox Detector (SSD) model from here.
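The Object Detection API returns parallel arrays of detection scores and class ids per image. A sketch of the post-processing step that maps the best detection to a light state follows; the class-id mapping and the confidence threshold are assumptions for illustration, not the exact values from the trained model.

```python
# Hypothetical class-id mapping for the trained SSD model.
CLASS_TO_STATE = {1: 'GREEN', 2: 'RED', 3: 'YELLOW'}
SCORE_THRESHOLD = 0.5  # detections below this confidence are ignored

def classify(scores, classes):
    """Return the traffic light state of the highest-scoring detection,
    or 'UNKNOWN' if nothing exceeds the confidence threshold."""
    score, cls = max(zip(scores, classes), default=(0.0, None))
    if score < SCORE_THRESHOLD:
        return 'UNKNOWN'
    return CLASS_TO_STATE.get(cls, 'UNKNOWN')
```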
Here are some example images from the classification:
- I had massive performance problems when the camera is switched on in the simulator, no matter whether I used the VM provided by Udacity, the Docker image, the workspace, or even a local installation on a well-powered machine with a GPU. Even without detection, the car isn't able to follow the waypoints due to massive delays. The only way to make it run was an Ubuntu live system on my working PC.
- To address the performance issue, the frequency of waypoint publishing was decreased to 25 Hz, and the publishing of obstacle and lidar data has been stubbed out.
- While waiting at a traffic light, the car is sometimes not able to come to a full stop and accelerates briefly before stopping completely. This is currently addressed by stopping four waypoints ahead of the stop line waypoint.
- As this is an individual submission and the software is not intended to run on Carla according to Udacity, the net was trained on real-world images but never optimized for them, and the software makes no distinction between separate models for the real world and the simulation.
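The early-stop workaround from the list above can be sketched as a deceleration profile that reaches zero a few waypoints before the stop line. The function and constant names here are illustrative; only the four-waypoint buffer comes from the project.

```python
import math

MAX_DECEL = 1.0      # m/s^2, assumed comfortable deceleration
STOP_BUFFER_WPS = 4  # stop this many waypoints before the stop line

def decelerate(velocities, distances, stopline_idx):
    """Cap each waypoint's target velocity so the car reaches zero
    STOP_BUFFER_WPS waypoints before the stop line.

    velocities: original target velocities per waypoint (m/s)
    distances:  cumulative distance of each waypoint from the first (m)
    """
    stop_idx = max(stopline_idx - STOP_BUFFER_WPS, 0)
    result = []
    for i, v in enumerate(velocities):
        if i >= stop_idx:
            result.append(0.0)  # at or past the early stop point
            continue
        dist = distances[stop_idx] - distances[i]
        # Velocity reachable when decelerating uniformly over dist: v^2 = 2*a*d
        v_allowed = math.sqrt(2.0 * MAX_DECEL * dist)
        result.append(min(v, v_allowed))
    return result
```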
Note: I shrank the repository by deleting the classifier training data, due to Udacity's repository size limit for project submissions. If you need any of that data or further information, please feel free to contact me.
Please use one of the two installation options, either native or docker installation.
- Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.
- If using a Virtual Machine to install Ubuntu, use the following configuration as a minimum:
- 2 CPU
- 2 GB system memory
- 25 GB of free hard drive space
The Udacity provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using this.
- Follow these instructions to install ROS:
  - ROS Kinetic if you have Ubuntu 16.04.
  - ROS Indigo if you have Ubuntu 14.04.
- Use this option to install the Dataspeed DBW SDK on a workstation that already has ROS installed: One Line SDK Install (binary)
- Download the Udacity Simulator.
Build the Docker container:

```bash
docker build . -t capstone
```

Run the Docker container:

```bash
docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone
```
To set up port forwarding, please refer to the instructions from term 2
- Clone the project repository

```bash
git clone https://github.com/udacity/CarND-Capstone.git
```

- Install python dependencies

```bash
cd CarND-Capstone
pip install -r requirements.txt
```
- Make and run styx

```bash
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
```
- Run the simulator
- Download the training bag that was recorded on the Udacity self-driving car.
- Unzip the file

```bash
unzip traffic_light_bag_file.zip
```

- Play the bag file

```bash
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
```
- Launch your project in site mode

```bash
cd CarND-Capstone/ros
roslaunch launch/site.launch
```
- Confirm that traffic light detection works on real-life images
Outside of requirements.txt, here is information on other driver/library versions used in the simulator and on Carla. Specifically, the simulator grader and Carla use the following:
| | Simulator | Carla |
|---|---|---|
Nvidia driver | 384.130 | 384.130 |
CUDA | 8.0.61 | 8.0.61 |
cuDNN | 6.0.21 | 6.0.21 |
TensorRT | N/A | N/A |
OpenCV | 3.2.0-dev | 2.4.8 |
OpenMP | N/A | N/A |
We are working on a fix to line up the OpenCV versions between the two.