Basic: What is sensor fusion?
- A car is equipped with many sensors. Integrating data from different cameras, radars, and other sensors helps build an accurate picture of the car's state in its environment.
- However, every sensor has its pros and cons. Take two sensors in a cellphone: the accelerometer is noisy, but its error does not accumulate; the gyroscope measures changes accurately, but its errors accumulate over time (drift).
- The Kalman filter is a classic tool for sensor fusion. Related questions: What is a high-definition (HD) map? Why is an HD map needed? Why is sensor fusion needed? What is the relationship between sensor fusion and highly automated driving (HAD)? What have you done with sensor fusion and HAD?
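As a minimal sketch of the idea (a toy 1-D constant-position model with made-up noise values, not a full vehicle-state filter), one Kalman filter predict/update loop looks like:

```python
# Minimal 1-D Kalman filter: fuse noisy position measurements into a
# smoothed estimate. Model: the true position is constant, plus small
# process noise; each measurement is the position plus sensor noise.
def kalman_1d(measurements, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """q: process-noise variance, r: measurement-noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: state unchanged, uncertainty grows by q.
        p = p + q
        # Update: blend prediction and measurement by the Kalman gain k.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

print(kalman_1d([1.0, 0.9, 1.1, 1.0]))
```

The gain k weighs how much to trust the new measurement versus the prediction; this is the same trade-off as the accelerometer/gyroscope example above, where one source is noisy and the other drifts.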
What are sensors in a car?
- CMOS cameras are blinded in rainy and foggy weather.
- Radars perform well in bad weather, but their resolution is relatively poor.
- LiDAR uses time-of-flight (ToF) technology with light. Based on how the signal is reflected, it characterizes surfaces, e.g. diffuse reflection vs. retro-reflection.
One example for sensor fusion in a car?
- Back camera and ultrasonic range finder for parking
- Front camera and multiple radar modules for ADAS
Sensor fusion systems
- centralized
- distributed
Practical experience about Sensor Fusion, Map and Cloud:
- Apollo Cloud platform
- HD Map: openDRIVE
- OTA
- Data Platform
Companies:
- Civil Maps
- base map
- camera view, voxel view
- extract feature points from the base map (SLAM)
- a second car can reuse the base map to localize itself
- accuracy: within ±5 cm, 80% of the time
- How to use LiDAR (C/C++)
HD Map
- The importance of HD maps in autonomous driving
- From the cloud-platform perspective, the simulation system cannot work without an HD map. The goal of a simulation system is to reconstruct real roads, traffic, and environments, and it is used for training algorithms. On the other hand, an HD map in the cloud can act as a data source for cars on the road. It supports autonomous driving mainly in four aspects: localization, perception, decision, and planning.
- Localization: the combination of GPS, IMU, and a conventional map works well for ordinary navigation, but its accuracy cannot meet the localization requirement of autonomous driving. Different sensors are used to perceive and localize a car. A quick solution is a map that contains the crucial information for driving, as detailed as possible; this enables a low-cost autonomous driving solution. A single-lens camera can capture images: detected lanes can be compared with the lane information in the HD map to determine the lateral position, while traffic lights, street lights, and light poles can be used for longitudinal localization.
- Perception: First, onboard sensors can perceive the environment only up to about 1 km away, while the HD map can provide information far beyond that range. Second, comparing the information actively captured by sensors against the HD map helps detect objects such as vehicles and pedestrians. The HD map can also supply the region of interest (ROI) that the perception module needs. Moreover, HD map information carries semantic meaning: for example, different traffic-light systems have different numbers of lights, and knowing how many lights should be detected helps a lot in designing perception algorithms.
- Decision and planning: together with real-time map updates, the HD map helps the vehicle plan efficiently.
- An HD map can help reduce the number of sensors; a feasible solution can hardly be proposed without one.
- Main procedure to create an HD map
- Data sourcing
- Image, Point Cloud, GPS Track
- Pre-Processing
- Sensor fusion, which combines information from GPS, IMU, LiDAR, and camera
- Deep learning for segmentation and detection
- Manual Verification
- a human is needed to increase accuracy
- Release
- Usage of HD map
- HD map, ADAS map, infotainment map
- Format of HD map: OpenDRIVE
- Update
- An update cycle exists between the map provider's cloud and the car. Here is the update cycle of the ApolloAuto HD map from Baidu.
- Data sourcing
- Sensor fusion in the cloud
Point cloud data from LiDAR is pre-processed in the car, then post-processed and fused in the cloud, creating a continuously updated 3D map for SLAM.
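As an illustration of the kind of in-car pre-processing involved (a sketch of one common step, not Apollo's actual pipeline), a voxel-grid filter downsamples a raw point cloud before it is uploaded:

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size=0.2):
    """Average all points that fall into the same cubic voxel.
    points: iterable of (x, y, z) tuples; returns one point per voxel."""
    buckets = defaultdict(list)
    for p in points:
        # Integer voxel index along each axis identifies the bucket.
        key = tuple(int(c // voxel_size) for c in p)
        buckets[key].append(p)
    # Replace each bucket with the centroid of its points.
    return [
        tuple(sum(c) / len(pts) for c in zip(*pts))
        for pts in buckets.values()
    ]

cloud = [(0.01, 0.02, 0.0), (0.03, 0.01, 0.0), (1.0, 1.0, 1.0)]
print(voxel_downsample(cloud, voxel_size=0.2))
```

This shrinks the data volume sent to the cloud while preserving the geometry at the chosen resolution.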
- Basic theories:
- Coordinate systems: ego-vehicle reference frame, homogeneous coordinates
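Homogeneous coordinates let a rotation and a translation be applied as one matrix multiply; a small 2-D sketch (made-up pose values) mapping a point from the ego-vehicle frame to the world frame:

```python
import math

def pose_to_matrix(x, y, yaw):
    """3x3 homogeneous transform for a 2-D ego pose (x, y, yaw)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, x],
            [s,  c, y],
            [0,  0, 1]]

def transform(T, point):
    """Apply a 3x3 homogeneous matrix to a 2-D point (x, y)."""
    px, py = point
    v = (px, py, 1.0)  # append w = 1 to make the point homogeneous
    return tuple(sum(T[i][j] * v[j] for j in range(3)) for i in range(2))

# A point 1 m ahead of an ego vehicle sitting at (10, 5), facing +90 deg:
T = pose_to_matrix(10.0, 5.0, math.pi / 2)
print(transform(T, (1.0, 0.0)))  # world position of that point
```

Chaining frames (e.g. sensor → vehicle → world) is then just matrix multiplication of the individual transforms.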
- Sensor fusion algorithms:
- Bayesian filtering
- Kalman Filter and Extended Kalman Filter
- Particle Filter
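Of these, the particle filter can represent non-Gaussian, multi-modal beliefs. A bare-bones 1-D version (toy motion and measurement models with invented noise values, not a production localizer):

```python
import math
import random

def particle_filter_step(particles, control, measurement,
                         motion_noise=0.1, meas_noise=0.5):
    """One predict-weight-resample cycle for 1-D localization.
    particles: list of floats (position hypotheses)."""
    # Predict: move every particle by the control input plus noise.
    moved = [p + control + random.gauss(0, motion_noise) for p in particles]
    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = [math.exp(-((measurement - p) ** 2) / (2 * meas_noise ** 2))
               for p in moved]
    # Resample particles in proportion to their weights.
    total = sum(weights) or 1.0
    probs = [w / total for w in weights]
    return random.choices(moved, weights=probs, k=len(moved))

random.seed(0)
particles = [random.uniform(0, 10) for _ in range(500)]
for _ in range(10):
    particles = particle_filter_step(particles, control=0.0, measurement=3.0)
mean = sum(particles) / len(particles)
print(mean)  # the particle cloud collapses near the measurement
```

The Kalman filter does the same predict/update cycle analytically for the Gaussian case; the particle filter trades that efficiency for generality.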
- Clustering algorithm
- K-means
- Single Linkage Clustering
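Clustering is what groups point-cloud returns into candidate objects; a bare-bones K-means on 2-D points (toy data, fixed iteration count, deterministic initialization — a sketch, not a tuned implementation):

```python
def kmeans(points, k, iters=10):
    """Naive K-means on 2-D points; returns the final centroids.
    Initializes centroids from the first k points for determinism."""
    centroids = list(points[:k])
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: (p[0] - centroids[i][0]) ** 2 +
                                  (p[1] - centroids[i][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each centroid to its cluster's mean.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids

pts = [(0, 0), (0.2, 0.1), (5, 5), (5.1, 4.9)]
print(kmeans(pts, k=2))
```

K-means needs k chosen in advance; single-linkage clustering instead merges the closest pairs bottom-up, which suits point clouds where the number of objects is unknown.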
- Methods for map extraction from point cloud
- Dense reconstruction algorithms: dense reconstruction is widely used for 3D printing and face recognition. A low-cost solution (KinectFusion) using an RGB-D camera was proposed in 2011.
- Sparse reconstruction algorithms:
- Procedures for map extraction:
- Compact map representation
- Localization algorithm
- Basic theories: