Measuring the error and improving the PMD data

Introduction

The PMD data need to be as precise as possible in order to be used together with the stereo images. In this document we explain a way to evaluate the accuracy of the PMD data and explore some algorithms to improve it.

Accuracy calculation (Standard deviation)

Two types of accuracy may be evaluated: absolute accuracy and relative accuracy.
Measuring the absolute accuracy consists of comparing the distance measured by the camera for one pixel with the real distance of that pixel (which may be measured with a laser).
In this document we only consider the relative accuracy, and we use the standard deviation to evaluate it.

Determination of the point coordinates

The first thing we need to do is calculate the world coordinates of each point from the distance information delivered by the camera. For this purpose only a few geometric considerations are needed.

Points Coordinates
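How this is done depends on the intrinsic parameters of the camera. As a minimal sketch (not the implementation used here), assuming a pinhole model in which the camera delivers the radial distance along each viewing ray, with placeholder calibration values f (focal length), sx, sy (pixel pitch) and cx, cy (principal point):

// Minimal sketch: converting the radial distance measured for pixel (u, v)
// into camera/world coordinates under a pinhole model. The calibration
// parameters are placeholders, not values from this project.
#include <cmath>

struct Point3D { double x, y, z; };

Point3D toWorld(int u, int v, double dist,
                double f, double sx, double sy, double cx, double cy) {
    // Direction of the viewing ray through pixel (u, v) in camera coordinates.
    double dx = (u - cx) * sx;
    double dy = (v - cy) * sy;
    double dz = f;
    double norm = std::sqrt(dx * dx + dy * dy + dz * dz);
    // The PMD camera delivers the radial distance along this ray,
    // so the normalized ray direction is scaled by the measured distance.
    return { dist * dx / norm, dist * dy / norm, dist * dz / norm };
}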

Standard Deviation

The camera is placed in front of a wall. We get a set of points distributed around an ideal plane (the wall).

Camera in front of the wall

Each point has a distance d_i to the ideal plane.

Distance from points to ideal plane

The ideal plane is calculated with a least-squares algorithm: it minimizes the sum of the squares of the d_i (for the implementation see ls.cc and ls.h). This plane is defined by a point P0(x0,y0,z0) (the mean of the points in the measurement set) and a normal vector (a,b,c). The standard deviation is then calculated as the square root of the mean of the squares of the d_i.
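As an illustration of the idea only (the project code is in ls.cc and ls.h), a sketch of the plane fit and of the standard deviation computation, assuming the Eigen library is available, could look like this:

// Sketch of a least-squares plane fit and of the standard deviation
// computation (illustration only; the project code is in ls.cc / ls.h).
#include <Eigen/Dense>
#include <cmath>
#include <vector>

struct Plane {
    Eigen::Vector3d p0;      // point on the plane: mean of the measurement set
    Eigen::Vector3d normal;  // unit normal vector (a, b, c)
};

// Fit the plane that minimizes the sum of the squared distances d_i.
Plane fitPlane(const std::vector<Eigen::Vector3d>& points) {
    Eigen::Vector3d p0 = Eigen::Vector3d::Zero();
    for (const auto& p : points) p0 += p;
    p0 /= static_cast<double>(points.size());

    // Covariance matrix of the centered points.
    Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
    for (const auto& p : points) {
        Eigen::Vector3d d = p - p0;
        cov += d * d.transpose();
    }

    // The plane normal is the eigenvector belonging to the smallest eigenvalue.
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> solver(cov);
    Eigen::Vector3d normal = solver.eigenvectors().col(0);
    return { p0, normal };
}

// Standard deviation: square root of the mean of the squared d_i.
double standardDeviation(const std::vector<Eigen::Vector3d>& points, const Plane& plane) {
    double sum = 0.0;
    for (const auto& p : points) {
        double di = plane.normal.dot(p - plane.p0);  // signed distance to the plane
        sum += di * di;
    }
    return std::sqrt(sum / static_cast<double>(points.size()));
}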

To check the relevance of the standard deviation we made a sequence of measurements with the same configuration. As the next picture shows, the standard deviation has a very low variance and can therefore be considered a reliable way to evaluate the relative precision of the PMD camera.

Set of measurements of the standard deviation

Influence of the distance

Influence of distance

Due to technical limitations we restricted our measurements to the range [900 mm, 2200 mm].

For comparison, the next picture shows a few results from [reulke] for the PMD[vision]® 19k. For equivalent distances the standard deviation of the PMD[vision]® 19k appears to be lower than that of the PMD[vision]® 3k-S.

PMD 19k standard deviation

Influence of integration time

Increasing the integration time should increase the accuracy of the data, since it increases the SNR (signal-to-noise ratio). The statistical uncertainty of the measurement is inversely proportional to the SNR:

Noise formula
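As a minimal sketch of this relation (assuming photon shot noise is the dominant noise source, so that the SNR grows roughly with the square root of the integration time t_int):

\sigma_d \propto \frac{1}{\mathrm{SNR}}, \qquad \mathrm{SNR} \propto \sqrt{t_{\mathrm{int}}} \quad\Rightarrow\quad \sigma_d \propto \frac{1}{\sqrt{t_{\mathrm{int}}}}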

The results we obtain are, however, quite unexpected. First of all, the PMD 3k-S works with a much lower integration time than the PMD 19k. This is due to the SBI (Suppression of Background Illumination) of the 3k-S, which considerably increases the SNR at low integration times. Around 5000 µs we observe an unexplained increase of the standard deviation.

Integration time

Filtering distance errors

Mean over several measurements

Our idea was to take, for each pixel, the mean over a set of several measurements. This "virtually" increases the integration time while avoiding saturation effects in the sensor. The results show that this algorithm does not reduce the standard deviation; some physical limitations may be responsible. The next figure shows the standard deviation as a function of the number of measurements used to calculate the mean.

Mean algorithm
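For illustration, the averaging step itself is simple; a sketch, assuming each distance frame is stored as a row-major vector of floats (an assumption about the data layout, not our actual interface):

// Per-pixel mean over N successive distance frames ("virtual" increase of
// the integration time). All frames are assumed to have the same size.
#include <cstddef>
#include <vector>

std::vector<float> meanOverMeasurements(const std::vector<std::vector<float>>& frames) {
    std::vector<float> mean(frames.front().size(), 0.0f);
    for (const auto& frame : frames)
        for (std::size_t i = 0; i < frame.size(); ++i)
            mean[i] += frame[i];
    for (float& v : mean)
        v /= static_cast<float>(frames.size());
    return mean;
}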

Error correction for changing reflection properties

Varying reflection properties of the object texture may influence the accuracy of the data. If the amplitude of the reflected signal is too low for some pixels, the evaluated distance may be very inaccurate. This effect can be seen in the next two pictures, where a chessboard is hanging on the wall. The first picture shows that the measured distance of the black squares is not the same as that of the white ones! The second picture shows the amplitude received by the PMD camera for each pixel.

Distance image with the chessboard

Amplitude image with the chessboard


The main idea of the correcting algorithm is to apply an intelligent filter that makes use of the amplitude information of the camera. In the following example we use a very simple filter which interpolates the "bad pixels" with the mean of the "good pixels" in a window of size 2*m+1 around each bad pixel. The parameter c sets the threshold below which an amplitude is considered too low.

Intelligent filter
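For illustration, this simple filter could be implemented as follows (the row-major image layout is an assumption about the data format; m and c are the parameters described above):

// Amplitude-based filter: a pixel whose amplitude is below the threshold c is
// replaced by the mean distance of the "good" pixels in the (2*m+1) x (2*m+1)
// window around it. At the image borders the window is clipped; if no good
// pixel is found in the window, the original value is kept.
#include <vector>

void filterLowAmplitude(std::vector<float>& distance,
                        const std::vector<float>& amplitude,
                        int width, int height, int m, float c) {
    std::vector<float> result = distance;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (amplitude[y * width + x] >= c) continue;  // good pixel: keep it
            float sum = 0.0f;
            int count = 0;
            for (int dy = -m; dy <= m; ++dy) {
                for (int dx = -m; dx <= m; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
                    if (amplitude[ny * width + nx] < c) continue;
                    sum += distance[ny * width + nx];
                    ++count;
                }
            }
            if (count > 0) result[y * width + x] = sum / count;  // interpolate bad pixel
        }
    }
    distance.swap(result);
}

Writing the interpolated values into a copy ensures that bad pixels are filled only from original good measurements, not from already interpolated neighbours.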

This algorithm of course gives very good results for a wall picture, since the filter flattens the data! Using it in more complex situations requires a careful choice of m and c. If m is too large, the algorithm will flatten the data and we will lose information. The algorithm is really intended for cases where the badly reflecting areas have a limited surface; m should be of the same order of magnitude as these areas.

References

Antoine Mischler, 28-02-2007
Scanning real world objects without worries, TU-Berlin