Calibration of the Olympus UZ-500
1. Introduction
The goal of our scanning project is to build a 3D scanning system from two digital cameras and one PMD camera. All of these cameras introduce distortion, which would make the matching of the pictures very hard. One of the main tasks of our project is therefore the calibration of our cameras, so that we can deskew their pictures.
Two important methods exist for calculating the intrinsic parameters: the algorithm of Tsai and the algorithm of Zhang. Tsai's algorithm needs pictures of a test field together with exact data about the position and movement of the camera and the test field. Zhang's algorithm is more complex and needs more computing power, but it only requires pictures of the test field, without any position data. We therefore chose Zhang's calibration method, because it is easier to handle and computing power is no problem with modern systems.
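As an illustration of how Zhang's method is used in practice, the following sketch calibrates from a set of chessboard pictures with OpenCV, whose calibrateCamera routine is based on Zhang's approach. The board geometry, square size and folder name are assumptions for the example, not the settings of our test field or of Cam Checker.

```python
import glob
import cv2
import numpy as np

# Assumed chessboard geometry: 9x6 inner corners, 25 mm squares (not our real test field).
PATTERN = (9, 6)
SQUARE_MM = 25.0

# 3D coordinates of the chessboard corners in the board plane (z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
image_size = None
for path in glob.glob("calib_images/*.jpg"):   # hypothetical folder name
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue                               # skip pictures where the board is not detected
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

# calibrateCamera returns the camera matrix and the distortion coefficients (k1, k2, p1, p2, k3).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("distortion coefficients:", dist.ravel())
```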
2. Basics
We will now look at how the Olympus UZ-500 was calibrated: we took many pictures, Cam Checker calculated the intrinsic parameters, and then we deskewed some pictures. In the following we explain step by step what is important when a camera is to be calibrated.
2.1. The experimental setup
We used the Olympus camera, one laptop and a black and white chessboard for the setup. The chessboard was fixed to the wall and the camera stood on its tripod in front of the chessboard.
(the experimental setup)
For every zoom and focus setting we took ten pictures from different views; the camera was moved slightly after each picture. With our automated capturing tool, which is presented in the tool documentation by Sebastian Szczepanski, we could take pictures for one zoom setting and all 120 focus positions in series. The camera stays at the first position and 120 pictures are taken in series without moving it. The capturing tool saves all pictures in separate folders, so they are easy to find. After the 120 pictures for the 120 focus settings we moved the camera and took the next 120 pictures. This was repeated until we had 10 pictures for every one of the 120 focus settings at that zoom setting.
The intrinsic parameters are calculated by the tool Cam Checker, which is also presented in the documentation by Sebastian Szczepanski. Cam Checker needs pictures of very good quality, so we now look at what is important for taking good pictures.
2.2. Picture Quality
Depth of Field
In our setting we ran through the whole focus range for one zoom setting. The camera position was not changed while running through the focus range, so the plane of sharp focus did not always lie on the test field, and the pictures would not have been sharp enough for Cam Checker. We avoided this problem by setting the camera aperture as small as possible and taking pictures with a longer exposure time. With a small aperture, the depth of field is at its maximum.
(schematic of the depth of field; picture from www.wikipedia.org)
In that schematic, D(F) is the farthest point at which the picture is still sharp, and D(N) is the nearest point at which we get sharp pictures. The distance between D(F) and D(N) grows with a smaller aperture, so we can get sharp pictures for every focus setting.
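As a rough sketch of this relationship, the standard thin-lens depth-of-field formulas give the near limit D(N) and the far limit D(F) from the focal length, the f-number, the focus distance and the circle of confusion. The numbers below are assumed example values, not measurements of the UZ-500.

```python
def dof_limits(f_mm, f_number, focus_mm, coc_mm=0.005):
    """Near limit D(N) and far limit D(F) of the depth of field (thin-lens approximation)."""
    H = f_mm ** 2 / (f_number * coc_mm) + f_mm      # hyperfocal distance
    d_near = focus_mm * (H - f_mm) / (H + focus_mm - 2 * f_mm)
    d_far = (focus_mm * (H - f_mm) / (H - focus_mm)
             if focus_mm < H else float("inf"))
    return d_near, d_far

# Assumed example: 10 mm focal length, focus at 1.5 m, wide open (f/2.8) vs. stopped down (f/8).
for N in (2.8, 8.0):
    d_n, d_f = dof_limits(f_mm=10.0, f_number=N, focus_mm=1500.0)
    print(f"f/{N}: D(N) = {d_n:.0f} mm, D(F) = {d_f:.0f} mm")
```

With the smaller aperture (f/8) the interval between D(N) and D(F) becomes several times larger, which is exactly the effect we exploited.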
Picture Brightness
We also needed constant illumination.
(chessboard illuminated with a spot light)
The left picture shows how the photograph looks when it is illuminated with a spot light. The right picture shows how Cam Checker tries to make a black and white picture from it in order to find the contours. In this picture it is impossible to find all vertices. With the threshold we could make the bright parts darker, but then the good areas would become too dark.
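The trade-off of a single global threshold can be illustrated with OpenCV. This is only a sketch of the general problem; we do not know how Cam Checker prepares its pictures internally, and the file names are placeholders. A locally adaptive threshold copes much better with a spot light than one global value.

```python
import cv2

# Hypothetical input: a chessboard photo with a bright spot-light hotspot.
gray = cv2.imread("chessboard_spotlight.jpg", cv2.IMREAD_GRAYSCALE)

# One global threshold: either the hotspot washes out or the dark corners turn black.
_, global_bw = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)

# Adaptive threshold: each pixel is compared to the mean of its neighbourhood,
# so a smooth brightness gradient no longer destroys the squares.
adaptive_bw = cv2.adaptiveThreshold(
    gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 31, 5)

cv2.imwrite("global_bw.png", global_bw)
cv2.imwrite("adaptive_bw.png", adaptive_bw)
```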
Capturing Angle
Another important factor is the capturing angle.
(picture with large capturing angle)
When the angle was too big, Cam Checker also had problems finding all vertices. So we varied the angle only between 0° and 30°; the results were then all good.
Reflective Materials
The last problem we had was with reflective materials. Especially strips of adhesive tape can be a trap when they are not used carefully.
(the red areas mark strips of adhesive tape)
In the pictures we can see that Cam Checker has problems finding all vertices in the areas where the tape is. When non-reflective paper is used and all tape is kept away from the squares, this problem should be solved.
3. Deskewing Pictures
First we look at the two different types of distortion that exist in photography: tangential and radial distortion. Tangential distortion is a shift of a pixel along the tangent through its original position.
(tangential distortion; picture from www.wikipedia.org)
Radial distortion is the second form of distortion. Here a pixel is shifted along the radius to the optical centre, and the effect grows with the distance from the optical centre.
(radial distortion; picture from www.wikipedia.org)
According to the papers of Zhang and Tsai, the tangential distortion can be neglected; only the radial distortion contributes significantly to the error.
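In Zhang's paper the radial distortion is modelled as a polynomial in the squared distance from the optical centre. In normalized image coordinates $(x, y)$ with $r^2 = x^2 + y^2$, the distorted coordinates are approximately

\[ x_d = x\,(1 + k_1 r^2 + k_2 r^4), \qquad y_d = y\,(1 + k_1 r^2 + k_2 r^4), \]

where $k_1$ and $k_2$ are the distortion coefficients that we evaluate in section 4.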
We took pictures for all 120 focus settings at zoom setting 63. From these pictures we calculated the intrinsic parameters, which we then used to deskew some first pictures in order to test how good the results are. We deskewed a picture of our chessboard.
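A minimal sketch of this deskewing step, assuming the intrinsic parameters from Cam Checker are available as a camera matrix and distortion coefficients; the numbers and file names below are placeholders. OpenCV's undistort removes the radial distortion described above, and a simple difference image makes the pixel shift visible.

```python
import cv2
import numpy as np

# Placeholder intrinsics: focal lengths, principal point and (k1, k2) as Cam Checker might report them.
K = np.array([[3200.0,    0.0, 1296.0],
              [   0.0, 3200.0,  972.0],
              [   0.0,    0.0,    1.0]])
dist = np.array([-0.21, 0.18, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

original = cv2.imread("chessboard_zoom63.jpg")   # hypothetical file name
deskewed = cv2.undistort(original, K, dist)

# Difference image: shows how the shift grows towards the image borders.
difference = cv2.absdiff(original, deskewed)

cv2.imwrite("chessboard_deskewed.png", deskewed)
cv2.imwrite("chessboard_difference.png", difference)
```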
(the original picture – the red line should run parallel to the squares)
In this picture one can see that the rows of squares are not straight. After deskewing the picture with the calculated intrinsic parameters, it looks much better.
(the deskewed picture – now the red line runs parallel to the squares)
The next picture shows the difference between the original and the deskewed picture. There one can see how the distortion grows with the distance from the optical centre: in the centre there is only a very slight distortion, while towards the borders we can see a big difference.
(difference between original and deskewed picture)
4. Evaluation and Error Estimation
One big problem is the massive amount of data and time required if pictures are to be taken for every combination of focus and zoom. That would be 62400 pictures (10 pictures for each of the 120 focus settings at every zoom step), which would need over 200 GB of disc space and about 400 hours to take and evaluate. So we need to reduce the number of required pictures.
We will now check which parameters are important for good results and which ones can be neglected without causing a big error.
In the following chart we see the trend of the distortion parameters over the focus range for one zoom setting. The zoom was set to 63 and the focus was varied from 121 to 240. The blue line shows the important coefficient k1 and the red line the less important k2. k1 belongs to the first term of the polynomial that describes the distortion; k2 belongs to the second term, which has less influence on the distortion. k3 and all higher coefficients can be neglected because of their very small influence on the distortion.
(chart showing the influence of the focus setting)
In the chart we can see that the parameters show no trend over the focus; they only vary around an average. The variation is bigger for k2, but k2 has less influence. For k1 we get an average with a standard deviation of 2.7%.
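The relative standard deviation quoted here can be computed in a few lines; the k1 values below are made-up placeholders, not our measured series.

```python
import numpy as np

# Placeholder k1 values over the focus range at zoom 63 (not the measured data).
k1 = np.array([-0.212, -0.208, -0.215, -0.209, -0.211, -0.206])

mean = k1.mean()
rel_std = k1.std(ddof=1) / abs(mean) * 100.0    # standard deviation relative to the average
print(f"average k1 = {mean:.4f}, relative standard deviation = {rel_std:.1f}%")
```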
The next chart shows the influence of the zoom setting. For two zoom settings (63 and 104), k1 is plotted over the complete focus range. The red line shows k1 for zoom 104 and the blue line for zoom 63.
(chart showing the influence of the zoom setting)
Here we see that the average changes for other zoom settings. With higher zoom we also see that the variation around the average increases; the standard deviation for zoom 104 is about 30%.
We should find out why these results contain such big errors. Several sources of error may exist. The error introduced by humans is often a big part of the total error, but in this case humans cannot really make mistakes: the camera takes the pictures, the pictures are loaded onto a laptop and evaluated by the tool Cam Checker. The positioning of the camera should have no influence when the Zhang algorithm is used.
In this workflow, the first part that can introduce errors is the camera. When we look closely at the camera we can see that the lens is not firmly fixed; it can move a bit in its mount. So it is clear that every time the camera is moved, we will get different pictures, with different distortions and errors.
The second part is the conversion of the RAW picture into the high-resolution JPG. This compression is lossy, so we also get an error here. The last part is the tool Cam Checker. It uses the algorithm of Zhang and, for that, also prepares the pictures: they are converted into black and white pictures, where information is lost again.
So we made some tests to find out which errors have a big influence on our results. First we checked what happens when we let Cam Checker analyse the same pictures more than once. We observed that the results remain exactly the same.
Next we tested the camera. For zoom 63 and the focus settings 121 to 130 we took ten pictures each, and we repeated this five times, calculating the intrinsics with Cam Checker every time. The setting was the same each time: same illumination, same test field, same camera settings and same Cam Checker settings. Only the camera was moved to different positions for the different pictures. For the Zhang algorithm the position of the camera is not important, so changing the position should have no influence on the results.
(five graphs which show k1 for zoom 63)
In this chart we see five graphs which should show the same values, except for the camera error.
So the error of the camera is about 20%, which is a lot.
5. Conclusion
We have seen that the zoom has a big influence on the distortion and that changes of the focus can be ignored. Furthermore, we have found three sources of error: the camera, the JPG conversion and the tool Cam Checker. But the error of the camera is so large that the other sources have little influence.
For a complete calibration of the camera it would be enough to take an average over the focus range for every zoom setting. Taking pictures at 20-30 focus settings per zoom, with 8 pictures per focus setting, should be enough. So the amount of 62400 pictures can be reduced to 8000-10000 pictures. But due to the camera, the error will still be quite big; standard commercial cameras are not accurate enough for better results.
6. Literature
1) Zhengyou Zhang. Flexible Camera Calibration By Viewing a Plane From Unknown Orientations. Technical report, Microsoft Research, 1999.
2) Roger Y. Tsai. A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses. IEEE Journal of Robotics and Automation, 1987.
3) Iliana Dimitrova. Hauptseminar Augmented Reality. TU München.
4) www.wikipedia.org
5) www.Olympus.com