
Iterative Closest Point

There are many different ways to accomplish scan matching, and of them I chose Iterative Closest Point (ICP). I chose ICP because the platform it is eventually intended to run on has a mecanum omni-drive, so unlike the Erratic it has no odometry data to seed the scan matching with; methods such as Lodo, though faster, would therefore not work.

Before the ICP algorithm I wrote runs, it first preprocesses the data to clean up the scans a bit. The preprocessing starts by removing any point more than three meters away from the robot. Next, the algorithm removes as many points as possible that are not part of some sort of cluster, by discarding any point that is more than half a meter away from the average of its immediate neighbors. After these two steps, the scans will usually end up being different sizes. This is remedied by determining a rough estimation of correspondence between the two scans: the preprocessor iterates through the reference scan and finds the closest unique point for each point in the old scan, throwing the pair out if the distance between the two is greater than half a meter. This section of my code has not been optimized.
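The preprocessing steps above can be sketched roughly as follows. This is a minimal, unoptimized illustration with NumPy, not my actual code; the function and parameter names are mine, and the cluster filter shown here simply drops the two endpoints of each scan, which the text does not specify.

```python
import numpy as np

def preprocess(scan, max_range=3.0, neighbor_thresh=0.5):
    # Step 1: drop any point more than max_range meters from the robot,
    # which sits at the origin of the scan frame.
    scan = scan[np.linalg.norm(scan, axis=1) <= max_range]
    # Step 2: drop points that are not in a cluster, i.e. points more
    # than neighbor_thresh meters from the average of their immediate
    # neighbors (endpoints have only one neighbor and are dropped here).
    keep = [i for i in range(1, len(scan) - 1)
            if np.linalg.norm(scan[i] - (scan[i - 1] + scan[i + 1]) / 2)
            <= neighbor_thresh]
    return scan[keep]

def rough_correspondence(old, ref, max_dist=0.5):
    # Pair each old point with its closest not-yet-used reference point,
    # discarding pairs farther apart than max_dist.  O(n^2), unoptimized.
    used, pairs = set(), []
    for i, p in enumerate(old):
        dists = np.linalg.norm(ref - p, axis=1)
        for j in np.argsort(dists):
            if j not in used:
                if dists[j] <= max_dist:
                    used.add(j)
                    pairs.append((i, j))
                break
    return pairs
```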

The actual ICP algorithm consists of an error function and a search function that tries to minimize the error. SciPy already has a good search function, scipy.optimize.leastsq(), for finding the minimum error between two sets of points. The error function is more complicated and takes more experimentation. I tested several error functions, including:

  1. The sum of the distance between each point in the previous scan and every point in the reference scan.
  2. The sum of the distance between each point in the previous scan and the point furthest away in the reference scan.
  3. The sum of the average distance between each point in the previous scan and the reference scan.
  4. The sum of the distance between each point in the previous scan and the closest point in the reference scan.

After trying each error function, I was still not getting satisfactory results. I then tested calculating the error by running (4) in both directions, from the previous scan and from the reference scan, and selecting whichever distance was smaller. This ended up producing satisfactory results.
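A sketch of this symmetric version of error function (4), wired into leastsq, might look like the following. This is my reading of the text, not the author's code: I take "whichever distance was smaller" to mean the direction with the smaller total error, and the parameterization (tx, ty, theta) is an assumption.

```python
import numpy as np
from scipy.optimize import leastsq

def transform(points, params):
    # Apply a 2-D rigid transform (tx, ty, theta) to an (N, 2) array.
    tx, ty, theta = params
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]]).T + np.array([tx, ty])

def symmetric_residuals(params, old, ref):
    # Error function (4) run in both directions: the distance from each
    # point to the closest point in the other scan.  Whichever direction
    # yields the smaller total error becomes the residual vector whose
    # sum of squares leastsq minimizes.
    moved = transform(old, params)
    fwd = np.array([np.linalg.norm(ref - p, axis=1).min() for p in moved])
    bwd = np.array([np.linalg.norm(moved - q, axis=1).min() for q in ref])
    return fwd if fwd.sum() < bwd.sum() else bwd
```

Usage would then be a single call such as `params, _ = leastsq(symmetric_residuals, np.zeros(3), args=(old, ref))`.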

Before continuing to explain my algorithm and possible optimizations, I would like to address the validity of my error function. It is hard to make a good comparison because I do not have any ground truth data, and since I am using my own error metric to determine the resulting error of a specific transformation, be it from ICP or odometry, the results will end up biased toward my algorithm. I believe that comparing the following sample of graphs gives an indication that my algorithm is more accurate than odometry:

[Figures: three pairs of alignment plots comparing ICP (left) against odometry (right), for scan pairs 15-16, 18-19, and 24-25.]

This does not mean that every pair of scans converges well under my algorithm. Problem cases include scans with few or no large features, scans with several similarly shaped features next to each other, scans with a lot of "noise" (which better preprocessing could remedy), and scans for which preprocessing returns a poor initial correspondence estimate. (My preprocessing algorithm relies on the assumption that the shift and rotation between scans is small. This usually works well, but if there is a large shift or rotation between scans, points that are near each other before ICP runs are not actually supposed to be near each other. This manifests as solitary points on the resulting graphs.)

The error can be reduced further by performing successive passes of the ICP algorithm on partial sets of the scan data before running it on the full scan. Running with three passes (every tenth point, every third point, and the full scan) did not affect run time significantly, but did increase the accuracy by a small amount. Running ten passes decreased the error by 24%, but has a significantly longer run time.
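The coarse-to-fine scheme can be sketched like this, under the assumption that each pass is a leastsq fit seeded with the previous pass's result. The simple one-way residual function included here is only to make the sketch self-contained; the names are mine.

```python
import numpy as np
from scipy.optimize import leastsq

def closest_point_residuals(params, old, ref):
    # One-way version of error function (4): the distance from each
    # transformed old point to the closest reference point.
    tx, ty, theta = params
    c, s = np.cos(theta), np.sin(theta)
    moved = old @ np.array([[c, -s], [s, c]]).T + np.array([tx, ty])
    return np.array([np.linalg.norm(ref - p, axis=1).min() for p in moved])

def multipass_icp(old, ref, strides=(10, 3, 1),
                  residuals=closest_point_residuals):
    # Coarse-to-fine ICP: fit on every tenth point, then every third,
    # then the full scan, seeding each pass with the previous result.
    params = np.zeros(3)
    for s in strides:
        params, _ = leastsq(residuals, params, args=(old[::s], ref[::s]))
    return params
```

The early passes are cheap because they run on a tenth (then a third) of the points, which is why adding them barely changes the total run time.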

Error Analysis

Because of the large number of iterations inherent in ICP, I am not certain that it can be performed at a satisfactory speed in Python, and it would likely need to be converted to C. Currently, plain ICP and the three-pass version both average about eight seconds, while the ten-pass version averages around eleven seconds. Interestingly, all three versions have very high variability in run time, sometimes converging in three to four seconds. I believe this is because the leastsq() function can take a while to find the minimum when the initial error between the two scans is large. Also, running on larger data sets (e.g., limiting the data to five meters instead of three) does not decrease the error significantly, but it does more than double the run time.

Seeding the ICP algorithm with a pose estimate results in a smaller variance of error and a shorter run time, but not a significantly smaller mean error. Since the rotational error of my ICP algorithm appears to be high relative to its translational error, I tried seeding only its rotational component from a pose estimate. The mean and variance of the error ended up being significantly worse. This is likely because the rotational component of odometry already has high drift. Both tests were run with normally distributed error around the odometry (the standard deviation was 15% of the magnitude of the odometry), which served to approximate a fairly poor estimation scheme such as dead reckoning. A better estimate would of course lead to faster convergence and lower error.
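The noisy seed used in these tests might be generated as follows. This is a guess at the setup: the function name is mine, and I am reading "15% of the magnitude of the odometry" as applying per component, which the text does not specify.

```python
import numpy as np

def noisy_odometry_seed(odom, rel_std=0.15, rng=None):
    # Perturb an odometry pose estimate (tx, ty, theta) with zero-mean
    # Gaussian noise whose standard deviation is 15% of each
    # component's magnitude, approximating a poor dead-reckoning seed.
    rng = np.random.default_rng() if rng is None else rng
    odom = np.asarray(odom, dtype=float)
    return odom + rng.normal(0.0, rel_std * np.abs(odom))
```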
