Every now and then a dramatic TV show comes on in which the good guys are ‘digitally enhancing’ a few pixels to create an ultra-sharp image of the face of a villain. The bad guy is recognized, he is caught, and the world is safe again. Of course we all know this is impossible: there are only a few pixels to work with, so it can’t be done.
Single frame of a car with an unreadable license plate and brand name
Well, it is indeed impossible if you only have those few pixels to work with. But if you have more than one image of the same target – either from slightly different viewpoints, or taken over time with a slight offset between the images – it turns out it actually is possible. It only works, though, if the images are under-sampled and you can determine the offset of the target with sub-pixel accuracy.
If the images are not under-sampled – as is the case for most high-resolution astrophotography images – the first frame contains the same information as any of the other frames, apart from a little bit of noise of course. Stacking the images will certainly increase the signal-to-noise ratio, and you can even reject frames that are too blurry or compensate a bit for movement, but you will never be able to go beyond the diffraction limit of the optics. You need a larger aperture to get more detail.
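You can see that square-root-of-N noise behaviour in a few lines of numpy. This is just a toy simulation (a flat target with made-up numbers, nothing to do with AS!2 itself): averaging 25 well-sampled frames reduces the noise by a factor of about 5, but it adds no new detail.

```python
import numpy as np

rng = np.random.default_rng(42)

signal = 100.0        # true brightness of a flat, featureless target
sigma = 10.0          # per-frame noise standard deviation
n_frames = 25

# Simulate 25 perfectly aligned, well-sampled frames: each frame is the
# same signal plus independent Gaussian noise.
frames = signal + rng.normal(0.0, sigma, size=(n_frames, 128, 128))

single_noise = frames[0].std()      # close to sigma
stacked = frames.mean(axis=0)       # the stack
stacked_noise = stacked.std()       # close to sigma / sqrt(n_frames)
```

The stacked frame is smoother, but since every frame sampled the scene the same way, the average cannot contain spatial detail that a single frame lacks.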
If you can’t determine the offset of the target in the image, then even if it did contain extra information, you wouldn’t know where to place it! Luckily, accurately determining offsets in images is no problem at all for AutoStakkert!2. If you do have under-sampled images to play around with, the fun can begin. I found an interesting data-set: a tiny video containing several frames of a moving car. One of these images is shown above, enlarged to 300% to show the individual pixels it is made up of.
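How do you measure an offset to better than a pixel? A common trick – and only a rough sketch of the idea, not AS!2’s actual registration code – is to locate the cross-correlation peak between two frames and then refine it with a three-point parabolic fit. A toy 1-D example with a known 0.3-pixel shift:

```python
import numpy as np

n = 256
x = np.arange(n)

def make_frame(shift):
    # A smooth Gaussian "feature" placed with sub-pixel precision.
    return np.exp(-((x - 120.0 - shift) ** 2) / (2 * 8.0 ** 2))

ref = make_frame(0.0)
frame = make_frame(0.3)   # the target moved 0.3 px between frames

# Cross-correlate via the FFT; the correlation peak sits at the offset.
corr = np.real(np.fft.ifft(np.conj(np.fft.fft(ref)) * np.fft.fft(frame)))

# Integer peak first, then a three-point parabolic fit for the
# sub-pixel part of the shift.
p = int(np.argmax(corr))
y0, y1, y2 = corr[(p - 1) % n], corr[p], corr[(p + 1) % n]
frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
shift = ((p + frac + n / 2) % n) - n / 2   # unwrap to a signed shift, ~0.3
```

Even though the frames themselves only have whole pixels, the shape of the correlation peak pins the offset down to a small fraction of a pixel – exactly the kind of information super resolution needs.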
Now let’s use AutoStakkert!2 to stack twenty of these images from a small sequence where the car is moving through the field, and sharpen the results a little bit.
We can read the license plate, and the brand of the car. The effective resolution has increased significantly by using Super Resolution techniques in AS!2 on multiple images of the same target.
All of a sudden we can read the license plate, and even the brand at the back of the car! We actually cleaned up the image and enhanced it. Super Resolution does indeed give you super resolution.
AutoStakkert!2 uses an advanced technique called drizzling – officially known as Variable Pixel Linear Reconstruction – which was originally developed for the Hubble Space Telescope to achieve sharper results from under-sampled images. Drizzling was applied to several tiny sections of the images to compensate for any image distortions. Combine this with an accurate estimation of the location of the features throughout the image, and you can end up with a lot more resolution than you started with, even when the field of view is changing.
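The core of the drizzle idea is simple enough to sketch. Below is my own toy 1-D version (a simplification for illustration, not AS!2’s implementation): each coarse pixel is shrunk to a “drop” of width pixfrac, placed on a finer output grid at its measured sub-pixel offset, and its flux is shared among the output pixels it overlaps. Given four frames of a feature narrower than one pixel, the drizzled result beats simply enlarging a single frame.

```python
import numpy as np

def drizzle_1d(frames, offsets, scale=2, pixfrac=0.8):
    # Minimal 1-D sketch of Variable Pixel Linear Reconstruction:
    # shrink each input pixel to a "drop" of width pixfrac, place it on a
    # scale-times finer grid at its measured offset, and share its flux
    # among output pixels in proportion to the overlap.
    n_out = len(frames[0]) * scale
    flux = np.zeros(n_out)
    weight = np.zeros(n_out)
    for frame, off in zip(frames, offsets):
        for i, value in enumerate(frame):
            center = (i + 0.5 + off) * scale            # in output-pixel units
            lo = center - 0.5 * pixfrac * scale
            hi = center + 0.5 * pixfrac * scale
            for j in range(int(np.floor(lo)), int(np.ceil(hi))):
                if 0 <= j < n_out:
                    overlap = min(hi, j + 1) - max(lo, j)
                    flux[j] += value * overlap
                    weight[j] += overlap
    return np.where(weight > 0, flux / np.maximum(weight, 1e-12), 0.0)

# Simulate an under-sampled camera: a feature narrower than one pixel,
# box-averaged by 1-pixel-wide detector pixels at four sub-pixel offsets.
fine = 8                                   # simulation sub-steps per pixel
x = np.arange(41 * fine) / fine            # positions in input-pixel units
truth = np.exp(-((x - 20.0) ** 2) / (2 * 0.6 ** 2))
offsets = [0.0, 0.25, 0.5, 0.75]

def observe(off):
    return np.array([truth[int((i + off) * fine):int((i + 1 + off) * fine)].mean()
                     for i in range(40)])

frames = [observe(off) for off in offsets]
recon = drizzle_1d(frames, offsets)

# Compare against simply enlarging a single frame 2x.
truth_out = truth[np.arange(80) * 4 + 2]   # truth at the output pixel centers
err_drizzle = np.abs(recon - truth_out).mean()
err_single = np.abs(np.repeat(frames[0], 2) - truth_out).mean()
```

The real algorithm works in 2-D, handles rotation and distortion, and carries proper weight maps, but the principle is the same: known sub-pixel offsets let the drops interleave on the fine grid, recovering structure no single frame resolved.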
Apart from making Hubble images sharper, Super Resolution is also applied to telescopes peering down at Earth. Some might find it interesting to see an enemy tank or structure that was hardly visible in a single image. More down-to-earth applications are to do exactly what we did here: read the license plates of speeding cars, or indeed recognize the bad guys in a video of a robbery. Unless the bad guys wore masks, of course.
To sum things up: Super Resolution is real. If you have just one image containing a few pixels, there is little you can do. But if you have many slightly different and under-sampled versions of those pixels, you can significantly increase the resolution of your images! For planetary astrophotography, however, this is hardly ever the case. Sometimes drizzling can give sharper results for low focal length recordings – when imaging the Sun in good seeing conditions at low magnifications, for example. For short exposures of deep-sky targets at lower focal lengths there is a much bigger chance it will actually increase the effective resolution, but for most planetary recordings there simply is little to gain by drizzling.
Left: original image. Center: 10 frames using the MAP-uHMT algorithm. Right: AutoStakkert!2 using 20 frames. Notice the higher amount of detail in the super resolution image made by AS!2.
AutoStakkert!2 does not use the MAP-uHMT method shown in the image above. The MAP-uHMT technique was developed by Dr. Feng Li at the University of New South Wales. AutoStakkert!2 only produces raw stacks, and to correct for residual image blurring these stacks were manually sharpened in Photoshop using the smart-sharpening tool. Better results can likely be obtained by using more advanced deconvolution methods to remove the residual image blurring.