Scanning real world objects without worries
One of the more challenging tasks in the automatic processing of data acquired from 3D scans is the coarse registration of different scans. In particular, this means that we want to find a rigid transformation that maps the points of one scan onto the corresponding points of another. Once we have this transformation, further refinement can be done with a fine registration method such as ICP.
We have implemented different ways of computing features within an image, and we will give a short outline of each of them. Features are needed to reduce the dense space of possible combinations of corresponding points.
The reference manual of the classes gives a detailed description of how to use them. Here we only consider the theoretical background and the motivation behind the implementation. For further reading we recommend the original papers, as they give a more elaborate description.
A common approach to computing features in 2D images is SIFT, the Scale Invariant Feature Transform, first proposed by David Lowe (1999). The algorithm extracts features that are invariant to rotation and scaling and partially invariant to changes in illumination and affine transformations. The generated features should be highly distinctive. In short, the algorithm consists of the following steps:
- Scale-space extrema detection
- Keypoint localization
- Orientation assignment
- Generation of keypoint descriptors
The main idea behind SIFT is to select the features from distinct points in the difference of Gaussian. First off, the scale space is defined as the convolution of an image I with a Gaussian kernel G of standard deviation σ:

L(x, y, σ) = G(x, y, σ) * I(x, y)
The difference of Gaussian is the difference between two layers in scale space along the σ axis:

D(x, y, σ) = L(x, y, kσ) − L(x, y, σ)
The difference of Gaussian provides a close approximation to the scale-normalized Laplacian of Gaussian, σ²∇²G, and hence a relationship to the heat diffusion equation ∂G/∂σ = σ∇²G, from which

G(x, y, kσ) − G(x, y, σ) ≈ (k − 1) σ²∇²G.
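The scale-space and DoG construction can be sketched in a few lines; the function name and the defaults (base scale 1.6, factor √2) are our own illustrative choices, not taken from siftpp:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_dog(image, sigma0=1.6, k=2 ** 0.5, levels=4):
    """Blur the image at successive scales sigma0 * k^i and return
    the differences of adjacent levels (the DoG stack)."""
    blurred = [gaussian_filter(image.astype(float), sigma0 * k ** i)
               for i in range(levels)]
    # D(sigma_i) = L(k * sigma_i) - L(sigma_i)
    return [b1 - b0 for b0, b1 in zip(blurred, blurred[1:])]
```

Extrema are then searched across the resulting stack of difference images.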
The localization of the keypoints is done by finding the extrema in the difference of Gaussian. First a threshold filter is applied to the difference of Gaussian to reduce complexity. After that, local minima and maxima across space and scale are computed. In the next step the accuracy of the keypoints is increased by fitting a quadratic function to the local samples. The Taylor expansion of D can be used to find the extremum:

D(x) ≈ D + (∂D/∂x)ᵀ x + ½ xᵀ (∂²D/∂x²) x,    x̂ = −(∂²D/∂x²)⁻¹ (∂D/∂x)
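The Taylor refinement can be sketched with central differences on a 3x3x3 DoG neighborhood; the function name and interface are our own, not siftpp's:

```python
import numpy as np

def refine_extremum(cube):
    """Sub-sample refinement of an extremum at the center of a 3x3x3
    DoG neighborhood via D(x) ~ D + g^T x + 0.5 x^T H x, which gives
    the offset x_hat = -H^{-1} g."""
    c = cube[1, 1, 1]
    # central-difference gradient at the center sample
    g = np.array([(cube[2, 1, 1] - cube[0, 1, 1]) / 2,
                  (cube[1, 2, 1] - cube[1, 0, 1]) / 2,
                  (cube[1, 1, 2] - cube[1, 1, 0]) / 2])
    # central-difference Hessian
    H = np.empty((3, 3))
    H[0, 0] = cube[2, 1, 1] - 2 * c + cube[0, 1, 1]
    H[1, 1] = cube[1, 2, 1] - 2 * c + cube[1, 0, 1]
    H[2, 2] = cube[1, 1, 2] - 2 * c + cube[1, 1, 0]
    H[0, 1] = H[1, 0] = (cube[2, 2, 1] - cube[2, 0, 1] - cube[0, 2, 1] + cube[0, 0, 1]) / 4
    H[0, 2] = H[2, 0] = (cube[2, 1, 2] - cube[2, 1, 0] - cube[0, 1, 2] + cube[0, 1, 0]) / 4
    H[1, 2] = H[2, 1] = (cube[1, 2, 2] - cube[1, 0, 2] - cube[1, 2, 0] + cube[1, 0, 0]) / 4
    x_hat = -np.linalg.solve(H, g)
    d_hat = c + 0.5 * g.dot(x_hat)   # DoG value at the refined extremum
    return x_hat, d_hat
```

The returned value d_hat is what the subsequent contrast test is applied to.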
To reduce the number of features, the points with low contrast are removed first, as they are more sensitive to noise. The paper by Lowe suggests discarding all keypoints with |D(x̂)| < 0.03, where D(x̂) = D + ½ (∂D/∂x)ᵀ x̂, assuming image values in the range [0, 1]. Along with that, features on edges are discarded, as they are not well distinguishable. Following the observation by Harris & Stephens (1988) that edges have a large principal curvature across the edge but a small one perpendicular to it, the ratio of the principal curvatures is computed from the trace and determinant of the Hessian matrix of D.
To find the orientation, the gradients are precomputed and an orientation histogram of 36 bins, weighted by the gradient magnitudes, is used.
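A minimal sketch of such a magnitude-weighted orientation histogram (our own helper, not the siftpp implementation):

```python
import numpy as np

def orientation_histogram(patch, bins=36):
    """36-bin orientation histogram of a grayscale patch, with each
    gradient sample weighted by its magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 360.0), weights=mag)
    return hist
```

The dominant orientation assigned to a keypoint is the peak of this histogram.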
The descriptor is derived from a 16x16 sample window around the keypoint, divided into 4x4 subregions; for each subregion a histogram over 8 directions is computed, with the gradients Gaussian-weighted around the center. This leads to a 128-dimensional feature vector.
The SIFT algorithm is only invariant to 3D rotation under orthographic projection. In our tests it was only stable for rotations up to about 30 degrees. The library we used is called siftpp.
As we deal with depth images, the SIFT algorithm will fail: it only works reliably on textured images, and depth images are in general smooth. We can interpret the depth images (or point clouds) as points scattered on a surface and therefore consider properties from differential geometry to define features. We follow the approach of Taubin (1995) to compute the principal curvatures and derive the features from them. To calculate the curvatures we first need to approximate the normals.
We adopted two approaches to calculate the normals of the surface. As we deal with depth images, the points lie on a regular grid, and hence we can simply use the cross product of the vectors to adjacent points to calculate the normals. This is very fast but obviously sensitive to noise. Alternatively, we can approximate the tangent plane at a point p by minimizing

Σᵢ (nᵀ pᵢ − d)²  over the neighbors pᵢ of p, subject to ‖n‖ = 1,

where n is the normal vector of the plane and d the distance of the minimizing hyperplane from the origin. We can consider this as an eigenvalue problem of the covariance matrix C = Σᵢ (pᵢ − p̄)(pᵢ − p̄)ᵀ, where p̄ is the centroid of the neighbors. The eigenvector with the smallest eigenvalue is then our normal. Again, as we assume that we have depth images, we can fix the orientation of the normals so that they always point in the direction of the camera.
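Both normal estimates can be sketched as follows; the convention that grid points are (x, y, depth[y, x]) and that the camera looks along the positive z axis is our own illustrative assumption:

```python
import numpy as np

def normals_cross(depth):
    """Normals on a regular depth grid from the cross product of the
    two grid tangent vectors (fast, but sensitive to noise)."""
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
    p = np.dstack([xs, ys, depth])
    du = np.gradient(p, axis=1)   # tangent along the x grid direction
    dv = np.gradient(p, axis=0)   # tangent along the y grid direction
    n = np.cross(du, dv)
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    n[n[..., 2] > 0] *= -1        # orient toward the assumed camera at -z
    return n

def normal_pca(neighbors):
    """Normal of a small point neighborhood (k x 3 array) as the
    eigenvector of the covariance matrix with smallest eigenvalue."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    return eigvecs[:, 0]
```

The PCA variant averages over the whole neighborhood and is therefore more robust against noise.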
The approach by Taubin was generalized by Lange and Polthier (2005) for point set surfaces and can be summarized as follows:
For a unit-length vector v in the tangent plane at a point p of a surface, the directional curvature can be obtained as

κ(v) = nᵀ γ''(0),

where n is the surface normal and γ a curve in the surface with γ(0) = p and γ'(0) = v. Following Taubin, we can now define a discrete directional curvature for each neighbor pᵢ of p:

κᵢ = 2 nᵀ (p − pᵢ) / ‖p − pᵢ‖²
The directional curvature satisfies Euler's formula

κ(θ) = κ₁ cos²θ + κ₂ sin²θ,

where θ is taken relative to an orthonormal basis {e₁, e₂} of the tangent plane, with v = cos(θ) e₁ + sin(θ) e₂, and κ₁, κ₂ are the principal curvatures.
We now assume that e₁ and e₂ are principal curvature directions, and hence the Weingarten map is S = diag(κ₁, κ₂) with respect to this basis. Thus we can solve:

M = (1/2π) ∫ κ(θ) v_θ v_θᵀ dθ,

where v_θ = cos(θ) e₁ + sin(θ) e₂. The eigenvalues of M are m₁ = (3κ₁ + κ₂)/8 and m₂ = (κ₁ + 3κ₂)/8, and from that we can derive

κ₁ = 3m₁ − m₂,   κ₂ = 3m₂ − m₁,
and after discretization one approaches

M̃ = Σᵢ wᵢ κᵢ Tᵢ Tᵢᵀ,

where Tᵢ is the unit-length projection of pᵢ − p onto the tangent plane and the weights wᵢ sum to one. The eigenvalues and eigenvectors of this discrete shape operator are now, up to the linear correction above, the principal curvatures and principal curvature directions at p.
The points with maximal mean curvature are considered as features. We implemented a priority queue for that. As the highest values may be outliers, an optional number of features can be skipped.
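The priority-queue selection with outlier skipping might look like this; a sketch using Python's heapq, not our actual implementation:

```python
import heapq

def top_curvature_features(points, mean_curv, count, skip=0):
    """Pick the `count` points with maximal mean curvature, optionally
    skipping the `skip` highest values, which may be outliers."""
    ranked = heapq.nlargest(count + skip, range(len(points)),
                            key=lambda i: mean_curv[i])
    return [points[i] for i in ranked[skip:]]
```

heapq.nlargest keeps only a small heap in memory, so this stays cheap even for dense curvature maps.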
The main problem with this curvature-based approach is that it is sensitive to noise. We note that in our tests we only considered synthetic images without noise and did no exact error analysis. Another problem is that the number of features can be quite high, as there may be large regions with high curvature, and hence the features might be ambiguous. Finally, curvature-based features will of course fail on surfaces with constant mean curvature, such as spheres.
Lee, Varshney and Jacobs applied the concept of the difference of Gaussian to the curvatures of a surface. The saliency they define has different applications, such as simplification algorithms, but in essence it delivers frequency ranges of the curvature. We adopt this and in a first step define

G(p, σ) = Σ_q C(q) exp(−‖q − p‖² / 2σ²) / Σ_q exp(−‖q − p‖² / 2σ²)

as the Gaussian-weighted average of the mean curvature C at a point p with standard deviation σ, the sums running over the neighbors q of p. The saliency is then defined in terms of the difference of Gaussian:

S(p, σᵢ) = |G(p, σᵢ) − G(p, 2σᵢ)|
where σᵢ is the standard deviation of the Gaussian filter at scale i. The paper suggests scales of σᵢ ∈ {2ε, 3ε, 4ε, 5ε, 6ε}, where ε is 0.3% of the length of the diagonal of the bounding box.
As for the curvature, we choose our features to be the ones with the highest saliency.
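On a regular depth grid, the saliency of a precomputed mean-curvature map reduces to a difference of Gaussian filters. A sketch under that simplifying assumption (the original paper works on meshes with geodesic neighborhoods):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency(mean_curv, sigma):
    """Saliency of a mean-curvature map on a regular grid:
    |G(sigma) - G(2 * sigma)|, the difference of Gaussian of the
    smoothed curvature."""
    fine = gaussian_filter(mean_curv.astype(float), sigma)
    coarse = gaussian_filter(mean_curv.astype(float), 2.0 * sigma)
    return np.abs(fine - coarse)
```

Note that a constant-curvature surface yields zero saliency everywhere, which is exactly the failure case discussed below.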
The problem of the plain curvature approach now shifts to the choice of a reasonable value of σ. But the effect of noise can be reduced this way, as we can choose a higher value of σ and thus smooth out noise. Still, features for surfaces with constant mean curvature are not well defined. As with the curvature, the number of features has to be quite high to get a sufficient number of potential matches.
Until now we only considered scalar values for each feature (or slightly higher-dimensional ones when considering the principal directions). The problem is that those values are not very distinct and will not allow a precise matching. Another approach to determine a coarse rigid transformation was proposed by Pingi et al. in "Exploiting the scanning sequence for automatic registration of large sets of range maps". They require consecutive range maps that overlap by at least 15 to 20%. The basic principles of the algorithm will be explained in short. We adapted the feature metric they proposed and use it to make our features more distinguishable.
The algorithm can be outlined in the following steps, given two depth images A and B:
- Look for a (small set of) feature points in A
- For each of the features look for a match in B
- Take 3 matched pairs and check whether the resulting transformation is good
Assume we have a depth (range) image R. The features are described by a (2k+1)x(2k+1) kernel K around a point p, where the entries of the matrix are

K_ij = n(p)ᵀ n(q_ij),

where n denotes the normal at a point and the q_ij are the adjacent pixels around p.
From there we can compute the variance of each kernel as

σ² = (1/m) Σ_ij (K_ij − K̄)²,

where K̄ is the mean of the m kernel entries. This also approximates the curvature of the surface. The algorithm works only in image space, hence gaps at boundaries (or other transitions with a discontinuity) are treated as connected and thus result in a high curvature. That is why only the points with medium variance (medium curvature) are chosen as features.
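A sketch of the kernel and its variance, assuming the normals have already been estimated on the grid; the function names are our own:

```python
import numpy as np

def feature_kernel(normals, y, x, k=3):
    """(2k+1)x(2k+1) kernel around pixel (y, x): each entry is the dot
    product of the center normal with a neighbor's normal."""
    n0 = normals[y, x]
    patch = normals[y - k:y + k + 1, x - k:x + k + 1]
    return patch @ n0          # dot product with every neighbor normal

def kernel_variance(kernel):
    """Variance of the kernel entries; a flat patch gives all dot
    products equal to 1 and hence variance 0."""
    return float(np.var(kernel))
```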
The metric to measure distances between features is given by the Frobenius norm of the difference of the kernels:

d(K_A, K_B) = ‖K_A − K_B‖_F
Then, for each feature, the feature of the other image with the smallest distance is chosen as its match. Call this set of matched pairs M.
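The matching step can be sketched as a brute-force nearest-neighbor search under the Frobenius distance:

```python
import numpy as np

def match_features(kernels_a, kernels_b):
    """For every kernel of image A, return the index of the kernel of
    image B with the smallest Frobenius distance ||K_a - K_b||_F."""
    matches = []
    for ka in kernels_a:
        dists = [np.linalg.norm(ka - kb) for kb in kernels_b]
        matches.append(int(np.argmin(dists)))
    return matches
```

With only a small set of features per image, the quadratic cost of this search is not a concern.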
For all combinations of 3 point pairs from M, generate a rigid transformation T and take the one with minimal error

E = Σᵢ ‖T(pᵢ) − qᵢ‖².

That is, we compute the transformations for all possible triples of matched pairs and choose the one with the lowest error with respect to its 3 pairs. If the error is too large, we restart and pick again some random features.
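A sketch of this last step: estimating a rigid transformation from 3 matched pairs (here via the SVD-based Kabsch method, our choice of solver, not necessarily the one used by Pingi et al.) and keeping the triple with the smallest residual:

```python
import numpy as np
from itertools import combinations

def rigid_from_pairs(p, q):
    """Least-squares rigid transform (R, t) with R @ p_i + t ~ q_i,
    via the SVD-based Kabsch method."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    u, _, vt = np.linalg.svd((p - pc).T @ (q - qc))
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, qc - r @ pc

def best_triple(pa, pb):
    """Try every triple of matched pairs and keep the transformation
    with the smallest residual sum(||R p + t - q||^2) on its triple."""
    best_err, best_rt = np.inf, None
    for idx in combinations(range(len(pa)), 3):
        p, q = pa[list(idx)], pb[list(idx)]
        r, t = rigid_from_pairs(p, q)
        err = float(((p @ r.T + t - q) ** 2).sum())
        if err < best_err:
            best_err, best_rt = err, (r, t)
    return best_rt
```

The winning transformation is then used as the starting point for the fine registration.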
We adopted this feature metric, the Frobenius distance between neighborhood kernels, for the other features we defined as well. That is, we take the neighborhood of the features into account to make them more distinguishable.
There are some improvements that can be made and some ideas that were not implemented yet. One idea would be to follow the concept of the difference of Gaussian in the sense of SIFT more directly when dealing with curvatures. This means that we would not look at a fixed variance to compute the saliency but search across the different variances (scales) for extrema and define features that way. Moreover, there might be better ways to describe the feature vector itself, as the description by a neighborhood matrix is not invariant under rotation around the camera axis; with the current descriptor we have to assume that the depth maps are taken in a reasonable sequence. One other way to define the kernel might be to orient it by the principal frame of a given point.
G. Taubin, Estimating the Tensor of Curvature of a Surface from a Polyhedral Approximation, Fifth International Conference on Computer Vision (ICCV '95).
C. Lange and K. Polthier, Anisotropic Smoothing of Point Sets, Special Issue of CAGD, 2005.
C. H. Lee, A. Varshney and D. W. Jacobs, Mesh Saliency, ACM SIGGRAPH 2005.
P. Pingi et al., Exploiting the Scanning Sequence for Automatic Registration of Large Sets of Range Maps, EUROGRAPHICS 2005.
D. G. Lowe, Distinctive Image Features from Scale-Invariant Keypoints, International Journal of Computer Vision, 2004.