The projection of a feature point $P_w$ in the world coordinate system onto the coordinates $(u, v)$ in the pixel plane can be obtained:

$$ Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = h(T_{bw}, P_w) = K\, T_{cb}\, T_{bw}\, P_w \quad (11) $$

where $T_{cb}$ is the transformation matrix from the vehicle coordinate system to the camera coordinate system, $T_{bw}$ is the transformation matrix from the world coordinate system to the vehicle coordinate system, $Z_c$ is the $z$-axis coordinate of the feature point in the camera coordinate system, and $K$ is the camera internal parameter matrix. According to (11), the elements in the HD map are projected into the pixel plane.
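A minimal sketch of this projection in Python (NumPy), assuming $4 \times 4$ homogeneous transforms for $T_{bw}$ and $T_{cb}$ and a $3 \times 4$ intrinsic matrix $K$; the function name and the numeric values are illustrative, not from the paper:

```python
import numpy as np

def project_to_pixel(P_w, T_bw, T_cb, K):
    """Project a homogeneous world point P_w = [x, y, z, 1] onto the
    pixel plane via Eq. (11): Z_c * [u, v, 1]^T = K @ T_cb @ T_bw @ P_w.
    T_bw: 4x4 world-to-vehicle transform; T_cb: 4x4 vehicle-to-camera
    transform; K: 3x4 intrinsic matrix (3x3 intrinsics, zero column).
    """
    P_c = T_cb @ T_bw @ P_w          # feature point in the camera frame
    uv1 = K @ P_c                    # un-normalized pixel coordinates
    Z_c = uv1[2]                     # depth along the camera z-axis
    return uv1[:2] / Z_c             # perspective division -> (u, v)

# Example with identity poses and a simple pinhole intrinsic matrix.
K = np.array([[500.0, 0.0, 320.0, 0.0],
              [0.0, 500.0, 240.0, 0.0],
              [0.0,   0.0,   1.0, 0.0]])
T_bw = np.eye(4)                     # vehicle at the world origin
T_cb = np.eye(4)                     # camera at the vehicle origin
P_w = np.array([1.0, 0.5, 10.0, 1.0])
print(project_to_pixel(P_w, T_bw, T_cb, K))  # -> approx [370., 265.]
```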
5.2. Feature association
To use an HD map for localization, the locations on the HD map of the objects detected by the sensors need to be known. This step is called feature association. Feature association locates the HD map elements that match the features detected in the camera images. The correct selection of map features can significantly improve the localization results. In this study, we choose lane line elements as map features, because lane lines are easy to detect, persist for a long time, have good reflection properties, and are detected reliably even in environments such as nighttime. The map elements are reprojected onto the pixel plane (map features), and their distance to the detected elements (perceptual features) is calculated and used to evaluate the localization results.
Define a perceptual feature $z$ as consisting of its kind $c$ and shape $s$, i.e., $z = \{c, s\}$. For a lane line perceptual feature, the slope difference between lanes on the same road section is very small, so distant lanes may mistakenly be included during HD map reprojection. To better distinguish lane lines on the same road section, the shape is defined as a sequence of lane line points $p$ and their slopes $k$: $s = \{p, k\}$.
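As a concrete illustration, the feature structure $z = \{c, s\}$ with $s = \{p, k\}$ could be represented as follows; the class and field names are hypothetical, chosen only to mirror the definitions above:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class LaneShape:
    points: np.ndarray  # (N, 2) sampled lane line points p in the pixel plane
    slopes: np.ndarray  # (N,) slope k at each sampled point

@dataclass
class PerceptualFeature:
    kind: str           # feature class c, e.g. "lane_line"
    shape: LaneShape    # shape s = {p, k}
```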
Based on the consistency of the local structure, the map feature reprojection error is calculated, and coarse matching between perceptual features and HD map features is performed. If the reprojection error is too large, the gap between the map feature and the perceptual feature is considered too large, and the pair is neither matched nor optimized; the algorithm continues only when the error is below a certain threshold. Define the map feature as $m$. Given a camera perceptual feature $z$, consider the confidence that the feature belongs to a certain class $c$, with probability $p(c \mid z)$ given by the target detection module. Assuming that the shape detection noise obeys a normal distribution, this is combined with the shape term to compute the feature's likelihood probability $p(z \mid m)$:

$$ p(z \mid m) = p(c \mid z)\, p(s \mid m, c)\, p(c \mid m) \quad (12) $$
For the lane lines, define the likelihood probability $p(s \mid m, c)$ of the shape:

$$ p(s \mid m, c) = \alpha\, e^{-\frac{(k_m - k_z)^2}{2\sigma^2}} + (1 - \alpha)\, e^{-\frac{(\bar{x}_m - \bar{x}_z)^2}{2\sigma^2}} \quad (13) $$

where $k_m$ and $k_z$ are the slopes of the lane lines in the map feature and the perceptual feature, respectively; $\bar{x}_m$ and $\bar{x}_z$ are the average coordinates of the sampling points of the lane lines on the $x$-axis in the map feature and the perceptual feature, respectively; $\sigma^2$ is the variance of the lane slope; and $\alpha$ is a weighting coefficient. If the likelihood probability $p(z \mid m)$ is greater than a certain threshold $Th$, this map feature and the perceptual feature are considered as a pair of coarse matches $\{m, z\}$ for the same feature.
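A sketch of the coarse-matching test from Eqs. (12) and (13), assuming the product form of (12) above; the weighting coefficient alpha, the threshold value, and the helper signatures are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def shape_likelihood(k_m, k_z, xbar_m, xbar_z, sigma2, alpha=0.5):
    """Eq. (13): weighted mixture over the slope difference and the
    difference of mean x-coordinates of the sampled lane line points."""
    slope_term = np.exp(-(k_m - k_z) ** 2 / (2.0 * sigma2))
    xbar_term = np.exp(-(xbar_m - xbar_z) ** 2 / (2.0 * sigma2))
    return alpha * slope_term + (1.0 - alpha) * xbar_term

def is_coarse_match(p_c_given_z, p_s, p_c_given_m, threshold=0.6):
    """Eq. (12): combine class confidence and shape likelihood, then
    accept the pair {m, z} when p(z|m) exceeds the threshold Th."""
    p_z_given_m = p_c_given_z * p_s * p_c_given_m
    return p_z_given_m > threshold

# Example: a perceptual lane line against a reprojected map lane line.
p_s = shape_likelihood(k_m=0.12, k_z=0.10, xbar_m=410.0, xbar_z=402.0,
                       sigma2=50.0)
print(is_coarse_match(p_c_given_z=0.9, p_s=p_s, p_c_given_m=1.0))  # True
```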
Considering the map structure consistency, the perceptual feature structure should be similar to the map feature structure. After coarse matching, the pairwise distances between the map features and the pairwise distances between the matching perceptual features are calculated, as shown in Figure 4. These two sets of distances are