Extrinsic calibration and aperture angle

By Dennis Kant

 

Extrinsic calibration in general


The main purpose of calibrating the camera is to be able to use the image data correctly, without systematic errors. The extrinsic matrix gives us the pose of the camera, i.e. its orientation and position relative to the object we photograph.

 

We use the extrinsic calibration to identify one and the same object in different pictures. In our final configuration we have three cameras that take pictures of one object. With a known extrinsic matrix for every camera we can relate the camera poses to each other, and thus find corresponding pixels in different pictures.


The extrinsic matrix consists of a rotation and a translation.
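
As an illustration (standard pinhole-camera notation; the symbols are generic, not taken from CamChecker), it can be written as the 3×4 block

\[
[\,R \mid t\,] =
\begin{pmatrix}
r_{11} & r_{12} & r_{13} & t_x \\
r_{21} & r_{22} & r_{23} & t_y \\
r_{31} & r_{32} & r_{33} & t_z
\end{pmatrix},
\]

where R is a 3×3 rotation matrix and t is the position of the world origin in camera coordinates.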


Our calibration method (theory)

 

For our calibration we use a program called CamChecker. Its algorithm follows the paper by Zhengyou Zhang.
The setup is the same for the intrinsic and the extrinsic calibration: we take several pictures of a known target, in our case a chessboard. The orientation of the chessboard must change between the pictures, and we have to take at least 5 pictures to determine all parameters reliably. The program automatically detects the connected squares of the chessboard.


For every picture we build one matrix L, which is formed as follows: every detected intersection point contributes the two rows

\[
\begin{pmatrix}
M^T & 0^T & -u\,M^T \\
0^T & M^T & -v\,M^T
\end{pmatrix}.
\]

The vector M contains the coordinates of a detected intersection point of the chessboard pattern in homogeneous form, and u and v are the corresponding pixel coordinates in the image. The dimension of the matrix is (2n, 9), where n is the number of intersections. We look for the solution vector x of this matrix and get the homogeneous equation

\[
L\,x = 0.
\]

We solve this equation with the SVD method. The solution vector x is one possible solution of this equation (the right singular vector belonging to the smallest singular value) and has dimension 9. With this solution vector we build the homography matrix H after the following pattern:

\[
H = \begin{pmatrix} x_1 & x_2 & x_3 \\ x_4 & x_5 & x_6 \\ x_7 & x_8 & x_9 \end{pmatrix}
\]

We build this matrix for every picture taken.
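
As an illustration, here is a minimal numpy sketch of this step; the function and variable names are our own, not CamChecker's:

    import numpy as np

    def estimate_homography(model_pts, image_pts):
        # DLT estimate of H mapping planar model points (X, Y) to pixels (u, v)
        rows = []
        for (X, Y), (u, v) in zip(model_pts, image_pts):
            M = [X, Y, 1.0]
            rows.append(M + [0.0, 0.0, 0.0] + [-u * m for m in M])
            rows.append([0.0, 0.0, 0.0] + M + [-v * m for m in M])
        L = np.asarray(rows)           # dimension (2n, 9)
        _, _, Vt = np.linalg.svd(L)
        x = Vt[-1]                     # right singular vector of the smallest singular value
        return x.reshape(3, 3)         # the homography H, defined up to scale

(A production implementation would normalize the coordinates first for numerical stability.)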


The homography matrix maps every point of the planar chessboard to its image point. Up to a scale factor it is defined as

\[
H = \begin{pmatrix} h_1 & h_2 & h_3 \end{pmatrix} = A \begin{pmatrix} r_1 & r_2 & t \end{pmatrix}.
\]

The matrix A is the intrinsic matrix, r_1 and r_2 are the first two columns of the camera's rotation matrix, and t is the translation vector.

 

To solve this equation we first have to rearrange the homography. From the columns h_i = (h_{i1}, h_{i2}, h_{i3})^T of H we construct the vectors v_{ij} as follows:

\[
v_{ij} = \begin{pmatrix}
h_{i1} h_{j1} \\
h_{i1} h_{j2} + h_{i2} h_{j1} \\
h_{i2} h_{j2} \\
h_{i3} h_{j1} + h_{i1} h_{j3} \\
h_{i3} h_{j2} + h_{i2} h_{j3} \\
h_{i3} h_{j3}
\end{pmatrix}
\]

With these vectors we build a matrix V: every picture contributes the two rows v_{12}^T and (v_{11} - v_{22})^T. We solve the resulting homogeneous equation V b = 0 again with SVD.
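
A minimal numpy sketch of this step, assuming the homographies from above (names are again our own):

    import numpy as np

    def v_ij(H, i, j):
        # Zhang's v_ij, built from columns i and j of H (0-based indices here)
        hi, hj = H[:, i], H[:, j]
        return np.array([hi[0] * hj[0],
                         hi[0] * hj[1] + hi[1] * hj[0],
                         hi[1] * hj[1],
                         hi[2] * hj[0] + hi[0] * hj[2],
                         hi[2] * hj[1] + hi[1] * hj[2],
                         hi[2] * hj[2]])

    def solve_for_b(homographies):
        # two rows per picture; at least 3 pictures are needed for a unique
        # solution (up to scale) of the 6 unknowns in b
        V = [row for H in homographies
                 for row in (v_ij(H, 0, 1), v_ij(H, 0, 0) - v_ij(H, 1, 1))]
        _, _, Vt = np.linalg.svd(np.asarray(V))
        return Vt[-1]                  # b = (B11, B12, B22, B13, B23, B33)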


The solution vector b contains the entries of the symmetric matrix B = A^{-T} A^{-1}, from which the intrinsic parameters are obtained in closed form:

\[
\begin{aligned}
v_0 &= \frac{B_{12}B_{13} - B_{11}B_{23}}{B_{11}B_{22} - B_{12}^2}, \\
\lambda &= B_{33} - \frac{B_{13}^2 + v_0\,(B_{12}B_{13} - B_{11}B_{23})}{B_{11}}, \\
\alpha &= \sqrt{\lambda / B_{11}}, \\
\beta &= \sqrt{\lambda B_{11} / (B_{11}B_{22} - B_{12}^2)}, \\
\gamma &= -B_{12}\,\alpha^2 \beta / \lambda, \\
u_0 &= \gamma v_0 / \alpha - B_{13}\,\alpha^2 / \lambda.
\end{aligned}
\]

The resulting intrinsic matrix

\[
A = \begin{pmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{pmatrix}
\]

is needed to solve the previous equation H = A (r_1 r_2 t).
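
A small numpy sketch of this closed-form extraction (our own helper; b is ordered as (B11, B12, B22, B13, B23, B33)):

    import numpy as np

    def intrinsics_from_b(b):
        b = np.asarray(b, dtype=float)
        if b[0] < 0:
            b = -b                     # b is only defined up to scale and sign
        B11, B12, B22, B13, B23, B33 = b
        v0 = (B12 * B13 - B11 * B23) / (B11 * B22 - B12 ** 2)
        lam = B33 - (B13 ** 2 + v0 * (B12 * B13 - B11 * B23)) / B11
        alpha = np.sqrt(lam / B11)
        beta = np.sqrt(lam * B11 / (B11 * B22 - B12 ** 2))
        gamma = -B12 * alpha ** 2 * beta / lam
        u0 = gamma * v0 / alpha - B13 * alpha ** 2 / lam
        return np.array([[alpha, gamma, u0],
                         [0.0,   beta,  v0],
                         [0.0,   0.0,  1.0]])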


By multiplying with the inverse of A we get

\[
r_1 = \lambda A^{-1} h_1, \qquad r_2 = \lambda A^{-1} h_2, \qquad r_3 = r_1 \times r_2, \qquad t = \lambda A^{-1} h_3,
\]

with the scale factor \(\lambda = 1 / \lVert A^{-1} h_1 \rVert\).

The matrix A is the same for every picture, since the intrinsic parameters do not change. To get the extrinsic matrix for every picture, we evaluate these equations with that picture's homography matrix.
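
A minimal numpy sketch of this last step (our own names; the recovered R is only approximately orthogonal because of noise, which Zhang fixes with a final SVD step):

    import numpy as np

    def extrinsics_from_homography(A, H):
        A_inv = np.linalg.inv(A)
        lam = 1.0 / np.linalg.norm(A_inv @ H[:, 0])   # scale factor
        r1 = lam * (A_inv @ H[:, 0])
        r2 = lam * (A_inv @ H[:, 1])
        r3 = np.cross(r1, r2)                         # completes the rotation
        t = lam * (A_inv @ H[:, 2])
        return np.column_stack([r1, r2, r3]), t       # R and t for this picture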

 

 

 

CamChecker (results)

 

 

We tested the theory with an experimental setup: we took pictures from 6 different positions and measured the distance between the camera center and the chessboard center.

 

Measured data                      Calculated data
X        Y        Z                X         Y         Z
0.145    -0.44    3.49             0.115     -0.413    4.358
0.645    -0.44    3.49             -0.372    -0.409    4.429
0.595    -0.36    2.44             0.662     -0.298    3.177
1.055    -0.36    2.22             0.074     -0.273    3.166
0.075    -0.36    2.515            0.151     -0.305    3.218
0.56     -0.36    2.44             0.636     -0.310    3.242

All values are given in meters (m).

This experiment showed that the resulting error is up to 7 cm, which is too large compared to measuring by hand, especially in the z direction. These errors stem partly from the focal length error that we introduce by modelling the real lens system as a pinhole camera. In addition, we set the z component to 0 in every calculation; when we calculate the extrinsic matrix we have to approximate the z component via the viewing angle.

 

 

Alternative calibration method

 

Alternatively, we can simulate an OpenGL camera and compare the data from the real picture with the output of the transformed OpenGL camera. With Powell's Direction Set algorithm we optimize the parameters towards a local minimum. To use this algorithm well, however, we need very good initial values.
This method is used in OT2, implemented by Uwe Hahne.
It could be used in addition to the Zhang algorithm, which provides good initial values.
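
To illustrate the idea, here is a minimal sketch with SciPy's Powell implementation; the toy projection model and all names are our own, not the OT2 code:

    import numpy as np
    from scipy.optimize import minimize

    def project(params, pts3d):
        # toy pinhole stand-in for the OpenGL camera: focal length f and
        # principal point (u0, v0), camera looking along +z
        f, u0, v0 = params
        u = f * pts3d[:, 0] / pts3d[:, 2] + u0
        v = f * pts3d[:, 1] / pts3d[:, 2] + v0
        return np.column_stack([u, v])

    def reprojection_error(params, pts3d, pixels):
        return np.sum((project(params, pts3d) - pixels) ** 2)

    pts3d = np.random.rand(20, 3) + [0.0, 0.0, 2.0]   # synthetic scene points
    pixels = project([800.0, 320.0, 240.0], pts3d)    # "measured" pixels
    x0 = [750.0, 300.0, 250.0]                        # good initial values matter here
    res = minimize(reprojection_error, x0, args=(pts3d, pixels), method="Powell")
    print(res.x)                                      # close to (800, 320, 240)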

 

 

 

Aperture angle

 

 

The cameras we use have many zoom levels, and we want to use them efficiently. Therefore we have to know the aperture angle of every zoom level. With a known aperture angle we can calculate the minimal distance from the camera to the object. In addition, we want maximal disparity to get more depth information.

 

 

To measure the aperture angle we use the following experimental setup:
For every zoom level we take pictures of a known geometry at a known distance, with the geometry placed coplanar to the image plane. We then calculate the width of the intersection of the view pyramid with the geometry plane.


From the fill rate of the image we get the width of the visible plane. To calculate the aperture angle we use the tangent:

\[
\theta = 2 \arctan\!\left(\frac{w}{2d}\right),
\]

where w is the visible width of the plane and d the distance to it.
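
A minimal sketch of this computation (standard trigonometry; the function name is our own):

    import math

    def aperture_angle(visible_width_m, distance_m):
        # full aperture angle from the visible plane width at a known distance
        return 2.0 * math.atan(visible_width_m / (2.0 * distance_m))

    # e.g. a plane 2.0 m wide exactly filling the image at 3.0 m distance:
    # aperture_angle(2.0, 3.0) -> 0.644 rad (about 36.9 degrees)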


Now we must calculate the minimal object distance for our setup:
the object must be totally overlapped, i.e. fully visible in both cameras.

 

Again we use the tangent to calculate this distance.
First we calculate the distance to the point where the two view pyramids of the cameras intersect.

 

Second, we need the minimal distance at which the object is totally overlapped.
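
A minimal sketch of both distances, assuming two parallel cameras with a known baseline (a simplification; the names and the parallel-camera assumption are our own):

    import math

    def overlap_distances(baseline_m, object_width_m, aperture_rad):
        t = math.tan(aperture_rad / 2.0)
        d_intersect = baseline_m / (2.0 * t)                 # view pyramids first meet
        d_total = (baseline_m + object_width_m) / (2.0 * t)  # object fully seen by both
        return d_intersect, d_total

    # e.g. 0.5 m baseline, 1.0 m wide object, 40 degree aperture:
    # overlap_distances(0.5, 1.0, math.radians(40)) -> (0.69 m, 2.06 m)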


The calculated results are shown in the following graph, with these parameters:

The x-axis shows the zoom level.
The y-axis shows the object width.
The z-axis shows the minimal object distance.

The maximal distance is limited to 7.5 m by the range of the PMD camera.

To get the most disparity we have to choose the lowest zoom level, but to get a higher optical fill rate and more detail in the image we have to choose a higher zoom level.
Most importantly, the working distance has to lie on or above this graph surface; otherwise the object is not totally overlapped in both cameras.

 

 

 

Literature

 

Zhengyou Zhang: A Flexible New Technique for Camera Calibration

   http://research.microsoft.com/~zhang/Calib/

 

Original CamChecker (with source code):

http://matt.loper.org/CamChecker/CamChecker_docs/html/index.html