We have implemented several ways of computing features within an image and give a short outline of each of them. A feature is needed to reduce the dense space of possible combinations when searching for corresponding points.

The reference manual of the classes gives a detailed description of how to use them. Here we only consider the theoretical background and the motivation behind the implementation. For further reading we recommend the original papers, as they give a more elaborate description.

- Scale-space extrema detection
- Keypoint localization
- Orientation assignment
- Generation of keypoint descriptors

The difference of Gaussians is the difference between two adjacent layers in scale space along the scale axis:

$$D(x, y, \sigma) = \big(G(x, y, k\sigma) - G(x, y, \sigma)\big) * I(x, y).$$

**DoG**

The difference of Gaussians provides a close approximation to the scale-normalized Laplacian of Gaussian $\sigma^2 \nabla^2 G$, and hence a relationship to the heat diffusion equation

$$\frac{\partial G}{\partial \sigma} = \sigma \nabla^2 G,$$

and therefore

$$G(x, y, k\sigma) - G(x, y, \sigma) \approx (k - 1)\,\sigma^2 \nabla^2 G.$$
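As an illustration, a DoG layer can be computed by subtracting two Gaussian-blurred copies of the image. The following is only a sketch: the function name and the choice $k = \sqrt{2}$ are ours, and SciPy's `gaussian_filter` stands in for whatever smoothing the implementation actually uses.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma, k=np.sqrt(2)):
    """Approximate the scale-normalized Laplacian of Gaussian by
    subtracting two adjacent Gaussian-blurred layers in scale space."""
    low = gaussian_filter(image.astype(np.float64), sigma)
    high = gaussian_filter(image.astype(np.float64), k * sigma)
    return high - low  # ~ (k - 1) * sigma^2 * Laplacian of G, convolved with I

image = np.random.rand(64, 64)
dog = difference_of_gaussians(image, sigma=1.6)
```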

To reduce the number of features, points with low contrast are removed first, as they are more sensitive to noise. The paper by Lowe suggests discarding keypoints with $|D(\hat{x})| < 0.03$, where $\hat{x}$ is the interpolated location of the extremum. In addition, features on edges are discarded, as they are not well distinguishable. Following the observation by Harris & Stephens (1988) that edges have a large principal curvature across the edge but a small one perpendicular to it, the ratio of principal curvatures is computed from the eigenvalues of the Hessian matrix, expressed through the Gaussian and mean curvature (its determinant and trace).
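The two rejection tests can be sketched as follows. The thresholds (0.03 for contrast, $r = 10$ for the curvature ratio) are the values suggested in Lowe's paper; the function name and interface are our own illustration.

```python
import numpy as np

def keep_keypoint(dog_value, hessian, r=10.0, contrast_thresh=0.03):
    """Reject low-contrast and edge-like keypoints (tests from Lowe 2004).
    `hessian` is the 2x2 Hessian of the DoG at the keypoint."""
    # Low-contrast test: extrema with small |D| are sensitive to noise.
    if abs(dog_value) < contrast_thresh:
        return False
    # Edge test: ratio of principal curvatures via trace and determinant.
    tr = hessian[0, 0] + hessian[1, 1]
    det = hessian[0, 0] * hessian[1, 1] - hessian[0, 1] * hessian[1, 0]
    if det <= 0:  # curvatures of different signs: discard
        return False
    return tr * tr / det < (r + 1) ** 2 / r
```

For example, a well-rounded extremum with Hessian `diag(2.0, 1.9)` passes, while an edge-like one with Hessian `diag(10.0, 0.1)` is rejected.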

**Maximum**

**Orientation**

**Descriptor**

**Features**

where $n$ is the normal vector of the plane and $d$ the distance of the minimizing hyperplane through the points. We can now consider this as an eigenvalue problem of the covariance matrix $C = \sum_i (p_i - \bar{p})(p_i - \bar{p})^T$, where $\bar{p}$ is the centroid of the neighborhood. The eigenvector with the smallest eigenvalue is then our normal. Again, as we assume that we have depth images, we can fix the orientation of the normals so that they always point in the direction of the camera.
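A minimal sketch of this normal estimation, assuming the neighborhood is given as a NumPy array of 3D points and the viewing direction is fixed (both the function name and the default view direction are our assumptions):

```python
import numpy as np

def estimate_normal(points, view_dir=np.array([0.0, 0.0, -1.0])):
    """Estimate the normal of a point neighborhood as the eigenvector
    of the covariance matrix with the smallest eigenvalue."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]                  # direction of least variance
    # Depth image: flip the normal so it points toward the camera.
    if np.dot(normal, view_dir) > 0:
        normal = -normal
    return normal
```

For points sampled from the plane $z = 0$ this returns approximately $(0, 0, 1)$, i.e. the normal oriented against the viewing direction.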

For a unit-length vector $v$ in the tangent space $T_pS$ at a point $p$ of a surface $S$, the directional curvature can be obtained as

$$\kappa_p(v) = \langle \gamma''(0), n \rangle,$$

where $n$ is the surface normal and $\gamma$ a curve in the surface with $\gamma(0) = p$ and $\gamma'(0) = v$. We can now define a discrete directional curvature toward a neighboring point $p_i$ by replacing the curve with the secant to that point.

The directional curvature satisfies

$$\kappa_p(v) = \langle S v, v \rangle,$$

where $S$ is the Weingarten map (shape operator) relative to a basis $\{e_1, e_2\}$ of $T_pS$ with $v = \cos(\theta)\, e_1 + \sin(\theta)\, e_2$, and $\|v\| = 1$.

We now assume that $e_1$ and $e_2$ are principal curvature directions and hence the Weingarten map is $S = \operatorname{diag}(\kappa_1, \kappa_2)$. Thus we can solve

$$\kappa_p(v) = \kappa_1 \cos^2(\theta) + \kappa_2 \sin^2(\theta),$$

where $\kappa_1, \kappa_2$ are the principal curvatures. From that we can derive the entries of $S$ by a least-squares fit to the directional curvatures, and after discretization one obtains a discrete shape operator from the discrete directional curvatures of the neighboring points.

The eigenvalues and eigenvectors of the discrete shape operator are then the principal curvatures and principal curvature directions at $p$.
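The least-squares fit of the shape operator from directional curvature samples can be sketched as follows. The interface, with unit tangent directions given in 2D tangent-plane coordinates and curvature samples already computed, is an assumption for illustration.

```python
import numpy as np

def fit_shape_operator(tangent_dirs, curvatures):
    """Least-squares fit of a symmetric 2x2 shape operator S from
    directional curvature samples kappa_i = v_i^T S v_i, where the v_i
    are unit vectors given in tangent-plane coordinates."""
    # Unknowns: S = [[a, b], [b, c]] -> kappa = a*vx^2 + 2*b*vx*vy + c*vy^2
    A = np.array([[vx * vx, 2 * vx * vy, vy * vy]
                  for vx, vy in tangent_dirs])
    (a, b, c), *_ = np.linalg.lstsq(A, np.asarray(curvatures), rcond=None)
    S = np.array([[a, b], [b, c]])
    # Eigenvalues/-vectors are principal curvatures and directions.
    k, dirs = np.linalg.eigh(S)
    return k, dirs
```

For example, samples drawn from $S = \operatorname{diag}(2, 0.5)$ along the directions $(1,0)$, $(0,1)$, and $(\tfrac{\sqrt 2}{2}, \tfrac{\sqrt 2}{2})$ recover the principal curvatures $0.5$ and $2$.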

**From image to curvature**

We define $G(\mathcal{C}, p, \sigma)$ as the Gaussian-weighted average of the mean curvature $\mathcal{C}$ around a point $p$ with standard deviation $\sigma$. The saliency is then defined in terms of the difference of Gaussians:

$$\mathcal{S}_i(p) = \big|\, G(\mathcal{C}, p, \sigma_i) - G(\mathcal{C}, p, 2\sigma_i) \,\big|,$$

where $\sigma_i$ is the standard deviation of the Gaussian filter at scale $i$. The paper suggests scales of $\sigma_i \in \{2\varepsilon, 3\varepsilon, 4\varepsilon, 5\varepsilon, 6\varepsilon\}$, where $\varepsilon$ is $0.3\%$ of the length of the diagonal of the bounding box.
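A sketch of this saliency computation on a point set, assuming the per-vertex mean curvatures have already been computed. Restricting the weighted average to vertices within $2\sigma$ follows the paper's description; function names and the exact interface are ours.

```python
import numpy as np

def gaussian_weighted_curvature(vertices, curvatures, center, sigma):
    """Gaussian-weighted average of mean curvature around `center`,
    restricted to vertices within distance 2*sigma."""
    d2 = np.sum((vertices - center) ** 2, axis=1)
    mask = d2 <= (2 * sigma) ** 2
    w = np.exp(-d2[mask] / (2 * sigma ** 2))
    return np.sum(w * curvatures[mask]) / np.sum(w)

def saliency(vertices, curvatures, center, sigma):
    """Saliency as the difference of Gaussians at scales sigma and 2*sigma."""
    fine = gaussian_weighted_curvature(vertices, curvatures, center, sigma)
    coarse = gaussian_weighted_curvature(vertices, curvatures, center, 2 * sigma)
    return abs(fine - coarse)
```

A sanity check: on a region of constant mean curvature both averages coincide, so the saliency is zero.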

**Saliency**

**Saliency features**

**Scanning sequence**

- Look for (a small set of) feature points in
- For each of the features look for a match
- Take 3 points and check if the transformation is good

From there we can compute the variance of each kernel as:

This also approximates the curvature of the surface. Since the algorithm works only in image space, gaps at boundaries (or other transitions with a discontinuity) are treated as connected and thus result in a high curvature. That is why only the points with medium variance (medium curvature) are chosen as features.
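Selecting the medium-variance points can be sketched as follows, assuming the per-kernel variances have already been computed; the quantile band used for "medium" is our illustrative choice, not a value from the paper.

```python
import numpy as np

def medium_variance_features(variances, low_q=0.4, high_q=0.8):
    """Select indices whose kernel variance lies in a medium band:
    high variance often marks depth discontinuities (false curvature),
    low variance marks flat, indistinct regions."""
    lo, hi = np.quantile(variances, [low_q, high_q])
    return np.flatnonzero((variances >= lo) & (variances <= hi))
```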

**Features**

Then the features with the smallest distance to the features of the other image are chosen as candidate matches.

That is, we compute transformations from triples of matches: each time we pick 3 pairs, estimate the transformation, and keep the one with the lowest error with respect to these 3 pairs. If the error is still too large, we restart and pick another random set of features.
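This trial-and-restart scheme can be sketched as follows. `rigid_from_three` uses the standard Kabsch/SVD solution for the rigid transform between point sets; the trial count and interface are illustrative assumptions, not the described implementation.

```python
import numpy as np

def rigid_from_three(src, dst):
    """Least-squares rigid transform (R, t) mapping three source points
    onto three destination points (Kabsch algorithm)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    return R, t

def best_transform(matches_src, matches_dst, trials=100, rng=None):
    """Try random 3-point subsets of the matches and keep the transform
    with the lowest residual on its own 3 pairs."""
    if rng is None:
        rng = np.random.default_rng(0)
    best, best_err = None, np.inf
    n = len(matches_src)
    for _ in range(trials):
        idx = rng.choice(n, size=3, replace=False)
        R, t = rigid_from_three(matches_src[idx], matches_dst[idx])
        err = np.linalg.norm(matches_dst[idx] - (matches_src[idx] @ R.T + t))
        if err < best_err:
            best, best_err = (R, t), err
    return best, best_err
```

With noise-free correspondences the best triple recovers the true rotation and translation almost exactly; in practice the residual would additionally be checked against all matches, not only the 3 chosen pairs.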

for the other features we defined; that is, we take the neighborhood of the features into account to make them more distinguishable.

C. Lange and K. Polthier, Anisotropic Smoothing of Point Sets, Special Issue of CAGD 2005

Chang Ha Lee, Amitabh Varshney, and David W. Jacobs, Mesh Saliency, ACM SIGGRAPH 2005

Paolo Pingi et al., Exploiting the scanning sequence for automatic registration of large sets of range maps, EUROGRAPHICS 2005

David G. Lowe, Distinctive Image Features from Scale-Invariant Keypoints, International Journal of Computer Vision, 2004