Image registration of blurred satellite images

The registered sensed image overlaid on the reference one. Intensity values in the overlapped area are calculated as the mean of the corresponding intensity values of the reference and sensed images.
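A minimal sketch of the blending described above, assuming the sensed image has already been warped into the reference frame; the function name and the validity mask argument are illustrative, not from the original text.

```python
import numpy as np

def overlay_mean(reference, warped_sensed, mask):
    """Blend two registered images: mean intensity where they overlap,
    reference intensity elsewhere. `mask` is True where the warped
    sensed image carries valid data."""
    out = reference.astype(float).copy()
    out[mask] = (reference[mask].astype(float) + warped_sensed[mask].astype(float)) / 2.0
    return out
```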

Image registration is the process of overlaying two or more images of the same scene acquired from different viewpoints, by different sensors, and/or at different times, so that pixels with the same coordinates in the images correspond to the same part of the scene.

Regardless of the image data involved and of the particular application, image registration usually consists of four major steps: feature detection, feature matching, transform model estimation, and image resampling and transformation.

Here, the combined invariants are exploited for image registration in the second step, CP matching. They are calculated over a circular neighborhood of each CP candidate detected earlier in the first step. The correspondence is then established by matching likelihood coefficients in the space of the invariants. The application described here uses the combined invariants for registration of satellite images that are rotated and shifted relative to one another and differently blurred. In practice, the blurring function is often an unknown composite function describing the degradation effects of the sensor and the atmosphere. Thanks to the invariance of the combined invariants to rotation, translation, and image blurring by any symmetric PSF, blurred images can be registered directly without any de-blurring. Most earlier matching methods fail in such a case.

The experiment was performed on real satellite data with simulated blurring and rotation. The reference image, of size 400x400 pixels, was extracted from a SPOT subscene of the Czech Republic, band 2.

The sensed image, of size 325x325 pixels, was extracted from a different SPOT subscene (band 2) from the same flight, covering approximately the same ground area. It was then rotated by 15 degrees, and the non-ideal acquisition was simulated by blurring with a 7x7 averaging mask.
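The simulated degradation of the sensed image can be sketched as follows, assuming SciPy is available; the function name is illustrative, and the rotation/blur parameters are the ones stated above.

```python
import numpy as np
from scipy.ndimage import rotate, uniform_filter

def simulate_acquisition(image, angle_deg=15.0, blur_size=7):
    """Simulate the non-ideal acquisition described in the text:
    rotate the scene, then blur it with an averaging (uniform) mask."""
    # Rotate about the image center; areas rotated in from outside are zero.
    rotated = rotate(image.astype(float), angle_deg, reshape=False, order=1)
    # 7x7 averaging mask approximating the composite sensor/atmosphere blur.
    return uniform_filter(rotated, size=blur_size, mode="nearest")
```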

To find CPCs in both frames, a method developed specifically for the detection of corner-like dominant points in blurred images was employed. The 30 CPCs selected in the reference and sensed images are depicted in the figures.

The CPC matching was realized by the following algorithm.

Algorithm Match:

Input: Two sets of CPCs from the sensed and reference images. These sets may also contain points having no counterparts in the other set.

Step 1: Invariant vector computation.
A vector of invariants is computed for each CPC over its circular neighborhood of radius 60 pixels. The vector consists of the following basic combined blur-rotation invariants: Phi(2,1), Phi(3,0), Phi(5,0), Phi(4,1), Phi(3,2), Phi(7,0), Phi(6,1), Phi(5,2), and Phi(4,3), with p_0 = 2 and q_0 = 1.
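The invariants Phi(p,q) are built from complex central moments of the circular neighborhood. The full construction of the combined blur-rotation invariants is beyond this sketch; the code below shows only the underlying moment computation over a circular window, with illustrative names and a radius argument standing in for the 60-pixel neighborhood.

```python
import numpy as np

def complex_moments(image, center, radius, orders):
    """Complex central moments c_pq = sum (x+iy)^p (x-iy)^q f(x,y)
    over a circular neighborhood of `center` (row, col) with the given
    `radius`. The combined invariants Phi(p,q) are assembled from such
    moments; this sketch computes only the raw c_pq values."""
    cy, cx = center
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = ys**2 + xs**2 <= radius**2  # circular window
    patch = image[cy - radius:cy + radius + 1,
                  cx - radius:cx + radius + 1].astype(float)
    z = (xs + 1j * ys)[mask]
    f = patch[mask]
    return {(p, q): np.sum(z**p * np.conj(z)**q * f) for p, q in orders}
```

Under rotation by an angle alpha, c_pq is multiplied by exp(i(p-q)alpha), which is why suitable products of these moments yield rotation invariants.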

Step 2: CPC correspondence.
The two most likely matching CPC pairs are found as those with the minimum distance between their invariant vectors. To gain higher robustness, the matching likelihood coefficients can be employed instead of the minimum-distance criterion (as was done in our experiments). CPCs from the sensed image are transformed using a similarity transform whose coefficients are calculated by means of the two CPC pairs mentioned above. Correspondence between the transformed CPCs from the sensed image and the CPCs in the reference image is then found via the thresholded nearest-neighbor rule in the spatial domain. Knowing the correspondence, the set of matched control point (CP) pairs is established.
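The geometric part of this step can be sketched as follows: a similarity transform is fully determined by two point pairs (conveniently via complex arithmetic), and the remaining points are paired by a thresholded nearest-neighbor search. Function names and the point representation are illustrative assumptions.

```python
import numpy as np

def similarity_from_two_pairs(s_pts, r_pts):
    """Similarity transform T(z) = a*z + b (rotation, scale, shift)
    determined by two matched control-point pairs. Points are (x, y)
    tuples treated as complex numbers."""
    s1, s2 = (complex(*p) for p in s_pts)
    r1, r2 = (complex(*p) for p in r_pts)
    a = (r2 - r1) / (s2 - s1)  # encodes rotation angle and scale
    b = r1 - a * s1            # encodes translation
    return a, b

def match_nearest(sensed, reference, a, b, threshold):
    """Transform sensed CPCs by T and pair each with its nearest
    reference CPC, keeping only pairs closer than `threshold`
    (thresholded nearest-neighbor rule). Returns (sensed_idx, ref_idx)."""
    ref = [complex(*p) for p in reference]
    pairs = []
    for i, p in enumerate(sensed):
        t = a * complex(*p) + b
        dists = [abs(t - r) for r in ref]
        j = int(np.argmin(dists))
        if dists[j] <= threshold:
            pairs.append((i, j))
    return pairs
```

Points without counterparts are rejected naturally: their transformed positions fall farther than the threshold from every reference CPC.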

Step 3: Improvement of CP localization in the sensed image.
For each CP in the sensed image, an improved position is sought in its local neighborhood. For every point in the neighborhood, an invariant vector is computed according to Step 1. The point whose invariant vector has the minimum distance to the invariant vector of the CP's counterpart is taken as the improved position of the CP.
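The refinement step amounts to an exhaustive search over a small neighborhood, minimizing the invariant-vector distance to the counterpart CP. In this sketch, `feature_fn` stands in for the Step 1 invariant computation, and the square search window and its radius are illustrative assumptions.

```python
import numpy as np

def refine_position(point, ref_vector, feature_fn, search_radius=3):
    """Search the square neighborhood of `point` for the position whose
    invariant vector (returned by `feature_fn`) is closest to
    `ref_vector`, the vector of the matched CP in the other image."""
    x0, y0 = point
    best, best_d = point, np.inf
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            v = feature_fn((x0 + dx, y0 + dy))
            d = np.linalg.norm(np.asarray(v, float) - np.asarray(ref_vector, float))
            if d < best_d:
                best, best_d = (x0 + dx, y0 + dy), d
    return best
```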

Algorithm Match has several user-defined parameters.