About

CALTag is a self-identifying marker pattern that can be accurately and automatically detected in images. Detection is robust to occlusions, uneven illumination and moderate lens distortion.

Chequerboards are often used for camera calibration, with the interior saddle points providing the necessary point correspondences. Manual identification of these points is at best tedious, and at worst, infeasible and unreliable (especially when calibrating large arrays of cameras). By augmenting the pattern with self-identifying binary codes, much like the excellent ARTag system, this process can be automated.

ARTag offers highly robust error detection and correction, but it is encumbered by licence restrictions and a somewhat inaccurate corner detector. CALTag employs only rudimentary error detection, but the code is free to use and modify, and it locates corners with a very accurate saddle-point finder.

Download

  • CALTag source code

    Note that MATLAB R2010a introduced a new syntax for finding connected components. Porting to earlier versions, or to Octave, should be possible without too much effort.

    Some third-party code (from libraries written by Peter Kovesi and Jean-Yves Bouguet) is required to run the program. You will be prompted to download the necessary files if CALTag cannot locate them.

  • Paper (PDF, 8.6 MB)

    Atcheson, B., Heide, F., Heidrich, W. CALTag: High Precision Fiducial Markers for Camera Calibration. 15th International Workshop on Vision, Modeling and Visualization. Siegen, Germany. November, 2010.

  • Calibration source code

    An implementation of Zhang's calibration algorithm. It duplicates functionality found in OpenCV and the MATLAB Camera Calibration Toolbox, but may be useful since the code is quite short and easy to modify. It integrates with CALTag.

You are free to use and modify the code as you wish for non-commercial purposes. If you publish any work using CALTag, please cite the above paper. Here is a BibTeX entry.
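
For context, the calibration code's initialization step follows Zhang's closed-form method: each calibration image yields a plane-to-image homography, and the intrinsic matrix is recovered from the linear constraints those homographies impose. Below is a rough NumPy sketch of that standard step; the function names are invented for illustration, and this is not the distributed MATLAB code.

```python
import numpy as np

def v_ij(H, i, j):
    # Zhang's constraint vector built from columns i and j of a homography,
    # ordered to match b = [B11, B12, B22, B13, B23, B33].
    return np.array([
        H[0, i] * H[0, j],
        H[0, i] * H[1, j] + H[1, i] * H[0, j],
        H[1, i] * H[1, j],
        H[2, i] * H[0, j] + H[0, i] * H[2, j],
        H[2, i] * H[1, j] + H[1, i] * H[2, j],
        H[2, i] * H[2, j],
    ])

def intrinsics_from_homographies(Hs):
    """Closed-form intrinsics from >= 3 plane homographies (Zhang's method)."""
    rows = []
    for H in Hs:
        rows.append(v_ij(H, 0, 1))                  # h1' B h2 = 0
        rows.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))  # h1' B h1 = h2' B h2
    _, _, Vt = np.linalg.svd(np.array(rows))
    b = Vt[-1]
    if b[0] < 0:  # B = lam * K^-T K^-1 is positive definite up to scale
        b = -b
    B11, B12, B22, B13, B23, B33 = b
    v0 = (B12 * B13 - B11 * B23) / (B11 * B22 - B12 ** 2)
    lam = B33 - (B13 ** 2 + v0 * (B12 * B13 - B11 * B23)) / B11
    alpha = np.sqrt(lam / B11)
    beta = np.sqrt(lam * B11 / (B11 * B22 - B12 ** 2))
    gamma = -B12 * alpha ** 2 * beta / lam
    u0 = gamma * v0 / beta - B13 * alpha ** 2 / lam
    return np.array([[alpha, gamma, u0],
                     [0.0,   beta,  v0],
                     [0.0,   0.0,  1.0]])
```

This presumably corresponds to the role of zhang_calibrate_init in the distributed code; the nonlinear refinement stage then minimises reprojection error over all parameters.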

Instructions

  • Use the included utility to generate a pattern description. For example, to make a 9x6 grid with 25.4mm (1-inch) square spacing, use:
    python generate_pattern.py -r 9 -c 6 -s 1 -f mypattern
  • Convert the generated "output.ps" file to PDF, attach it to a flat surface and photograph it.
  • Start MATLAB and invoke caltag as follows:
    I = imread( 'photo.jpg' );
    [wPt, iPt] = caltag( I, 'mypattern.mat', false );
    where mypattern.mat is the pattern description generated by the first program. This will output two Nx2 matrices: N world (grid) space coordinates and the corresponding N image-space coordinates. Note that MATLAB uses 1-based, row-column indexing, whereas images in C are often addressed with 0-based, x-y coordinates.
  • To use the calibration code, do not call CALTag manually. Instead, use the following sequence of commands, which will invoke it automatically:
    cal = zhang_init( 'IMG_*.JPG', 'mycalib.mat' );
    cal = zhang_detectcorners( cal, 'pattern9x6.mat' );
    cal = zhang_calibrate_init( cal, true, true, true, false );
    cal = zhang_calibrate_optimise( cal, false, false, 100 );
    save( 'mycalib.mat', 'cal', '-append' );
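
Regarding the indexing caveat above: if you pass CALTag's image points on to 0-based, x-y code (OpenCV, for example), they need a small conversion first. A minimal sketch, assuming each row of iPt is a 1-based [row, col] pair:

```python
import numpy as np

def matlab_rc_to_xy(pts):
    """Convert 1-based MATLAB [row, col] points to 0-based [x, y]."""
    pts = np.asarray(pts, dtype=float)
    return pts[:, ::-1] - 1.0  # swap row/col to x/y, shift origin to 0
```

For example, the MATLAB point [1, 1] (the top-left pixel) becomes [0, 0].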
    

How does it work?

For full details please see the paper. Briefly, the pattern is constructed such that every square in the grid has a unique binary code, regardless of its orientation. The detection algorithm performs some basic image processing to segment out the individual squares. Corner points are estimated by fitting lines through the edges of each square and intersecting them. These points are then refined to subpixel accuracy and used to define a homography mapping the unit square into the image. This homography lets us sample the code and check whether it matches one from the pattern. As long as a few markers are detected in this way, another homography can be fitted to the entire grid (using RANSAC), allowing us to locate corner points belonging to squares that were not detected directly (perhaps due to occlusion). The final output is a set of corresponding grid-space and image-space points.
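
The homography step described above can be illustrated with a standard direct linear transform (DLT) fit. This is a sketch of the general technique, not the actual CALTag implementation, and the RANSAC wrapper used for the grid-wide fit is omitted:

```python
import numpy as np

def fit_homography(src, dst):
    """DLT fit of H such that dst ~ H * src (both Nx2 arrays, N >= 4)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two homogeneous linear equations.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)        # null vector = flattened H
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map Nx2 points through H, dividing out the projective scale."""
    pts = np.asarray(pts, dtype=float)
    homog = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return homog[:, :2] / homog[:, 2:3]
```

With exactly four correspondences (e.g. the corners of the unit square mapped to one marker's corners) the fit is exact; with more points it minimises the algebraic error in a least-squares sense.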