The Orion Nebula 1/2

Occasionally I’m going to use this blog to show some of the images I make. When I do, I will try to tell a little bit about how the image was made, and what it actually is that you are looking at. Today I will discuss an image of M42 that I made over a year ago, on the night of November 18-19, 2012. I planned to post this story soon after the images were taken, but for some reason that never happened… Anyway, here is the first part of a rather long story that will probably be completed a year from now…

I did not want to go outside, because I had to get up early the next morning and was already tired. But astrophotographers are weird people, especially when they live in areas where the number of good clear skies per year is astronomically (ha!) low. Whenever there is an opportunity to observe the night sky, you take it, even if you don’t really want to. Because before you know it another month has passed without any clear skies, and you would just feel bad about not imaging when you could have.

So I went outside. It was cold. It was also slightly foggy, which I did not like: fog usually means the air is steady, which in turn meant the images would probably be good, and that I was not going to get much sleep that night. So I set up my telescope, connected the remote controller of the focuser, turned on the fan at the back of the telescope tube to force a temperature equilibrium, checked and corrected the alignment of the optics, and pointed the telescope at Jupiter. The moons of Jupiter looked very steady: the seeing was good. Tomorrow I was going to be tired.

I pretty much never look through the telescope myself, but let a camera do that for me. That way I can see many more details, and I can share the images with other people as well. So I powered up my laptop, added the filter wheel, Barlow lens, and camera, turned off the fan again because it can cause slight vibrations, and then spent the next three hours imaging Jupiter. The Orion Nebula was still too low above the horizon, so it did not make much sense to start imaging it yet, but I generally prefer imaging planets anyway, as they are more dynamic. Planets also require more magnification, which means you can only see them in high detail when the seeing is very good. The seeing was very good, so to me it was obvious I should image a planet, and the biggest one available that night was Jupiter.

But as this post is actually about the Orion Nebula, let’s fast forward to around 2 AM. It was still cold: there was a layer of ice on my telescope, I had to defog the secondary mirror a couple of times with a hair dryer, and my fingers were freezing. The seeing was slowly getting worse as well, so I was pretty much done with Jupiter. And then I noticed Orion, and in particular the fuzzy spot in the middle of the three stars that make up the sword of the Hunter, close to the larger structure of the three stars that make up his belt. That is where we can find the Orion Nebula.

As you probably know, everything in space is huge. Even Jupiter – whose light ‘only’ takes about 40 minutes to get here – is already enormous: Earth easily fits inside the Great Red Spot, a storm on Jupiter that has been raging for hundreds of years, and we could line up more than fifty thousand Earths side by side between here and Jupiter. The Orion Nebula is on another scale entirely: light takes about 24 years (!) to get from one side to the other. That is roughly 18 billion Earth diameters, so if every human being on Earth had their own Earth-sized planet, we could line all of them up across the nebula a couple of times over. The nebula itself is relatively close by: its light takes only about 1,300 years to reach us, while the Andromeda Galaxy, our nearest large neighboring galaxy, is almost two thousand times further away. Anyway, you get the picture: there is plenty of space in space.
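For the curious, these numbers follow from simple back-of-the-envelope arithmetic. A quick sanity check in Python, using rounded values for the speed of light and the Earth’s diameter:

```python
# Rough scale check for the figures above (all constants are rounded).
EARTH_DIAMETER_KM = 12_742                       # mean diameter of Earth
LIGHT_SPEED_KM_S = 299_792                       # speed of light
LIGHT_YEAR_KM = LIGHT_SPEED_KM_S * 365.25 * 24 * 3600

# Light from Jupiter takes roughly 40 minutes to reach us.
jupiter_distance_km = LIGHT_SPEED_KM_S * 40 * 60
print(f"Earths from here to Jupiter:  {jupiter_distance_km / EARTH_DIAMETER_KM:,.0f}")  # ~56,000

# The Orion Nebula is about 24 light-years across.
earths_across_nebula = 24 * LIGHT_YEAR_KM / EARTH_DIAMETER_KM
print(f"Earths across the nebula:     {earths_across_nebula:.1e}")                      # ~1.8e10
print(f"Per person (7 billion of us): {earths_across_nebula / 7e9:.1f}")                # ~2.5
```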

But when we zoom in a little bit on just the center of the Orion Nebula, this is what we see:

The bright center of the Orion Nebula. The brightest stars appear as just a single star when viewed with the unaided eye.

This image was made during the night in question with my 0.25 m Newtonian telescope. Of course, it does not come close to what Hubble sees when staring at M42. But for Hubble it is relatively easy: it has a huge 2.4 meter mirror that can collect light about 92 times faster (!) than my telescope can, and it can resolve details that are at the very least 10 times smaller. Hubble is also floating in space, which means it does not have to worry about the Earth’s atmosphere and its tendency to distort images, especially when trying to view really tiny details, and even more so when using long exposure times. The longer you expose an image, the more chances our atmosphere has to distort it.
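Those two figures are nothing more than aperture ratios; a quick check:

```python
# Light-gathering power scales with mirror area, resolving power with diameter.
hubble_diameter_m = 2.4
my_diameter_m = 0.25

light_gathering = (hubble_diameter_m / my_diameter_m) ** 2   # ratio of collecting areas
resolution_gain = hubble_diameter_m / my_diameter_m          # ratio of diffraction limits

print(f"Hubble collects light ~{light_gathering:.0f}x faster")    # ~92x
print(f"and resolves details ~{resolution_gain:.1f}x smaller")    # ~9.6x
```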

Unfortunately, this is where the story ends for now. Keep checking this blog for a follow-up! For now I’ll end with two close-ups of the image posted above. You can actually see protoplanetary disks here: disks of dense gas surrounding stars that have basically only just been formed!

Proplyds, or externally illuminated photo-evaporating protoplanetary disks, in the core of the Trapezium (the slightly elongated dots right next to the brightest star in the center). These are young stars that have only just been born.
Another proplyd in the center of the field of view.

Enhance!

Every now and then there is a dramatic TV show in which the good guys are ‘digitally enhancing’ a few pixels to create an ultra-sharp image of the face of a villain. The bad guy is recognized, he is caught, and the world is safe again. Of course we all know this is impossible. There were only a few pixels to work with; it can’t be done.

Right?

A single frame of a car with an unreadable license plate and brand name.

Well, it would indeed be impossible if you only had those few pixels to work with. But if you have more than one image of the same target, either from slightly different viewpoints or taken over time with a slight offset between the frames, it turns out it actually is possible. It only works, though, if the images are under-sampled and you can determine the offset of the target with sub-pixel accuracy.

If the images are not under-sampled – as is the case for most high resolution astrophotography images – the first frame contains the same information as any of the other frames, apart from a little bit of noise of course. Stacking the images will certainly increase the signal-to-noise ratio, and you can even reject frames that are too blurry or compensate for the movement a bit, but you will never be able to go beyond the diffraction limit of the optics (the toy example below illustrates this). You will need a larger aperture to get more detail.
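A minimal numpy sketch of that point: averaging many well-sampled, noisy frames cleans up the image by roughly the square root of the number of frames, but it adds no spatial detail that was not already present in each individual frame.

```python
import numpy as np

rng = np.random.default_rng(0)

# A well-sampled "true" scene, observed 25 times with heavy noise.
true_image = np.zeros((64, 64))
true_image[28:36, 28:36] = 1.0                      # one small bright feature
n_frames, noise_sigma = 25, 0.5

frames = [true_image + rng.normal(0, noise_sigma, true_image.shape)
          for _ in range(n_frames)]
stack = np.mean(frames, axis=0)

def snr(img):
    """Signal-to-noise: feature brightness over background scatter."""
    return img[28:36, 28:36].mean() / img[:16, :16].std()

print(f"single frame SNR: {snr(frames[0]):.1f}")    # ~2
print(f"stack SNR:        {snr(stack):.1f}")        # ~10, i.e. ~sqrt(25) better
# The stack is much cleaner, but its resolution is still that of one frame.
```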

If you can’t determine the offset of the target in the images, then even if they did contain extra information, we wouldn’t know where to place it! Luckily, accurately determining offsets in images is no problem at all for AutoStakkert!2. If you do have under-sampled images to play around with, the fun can begin. I found an interesting data set: a tiny video containing several frames of a moving car. One of these images is shown above, enlarged to 300% to show the individual pixels it is made up of.
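AutoStakkert!2’s own alignment code is not shown here, but the general idea of sub-pixel registration can be sketched with nothing more than an FFT-based cross-correlation and a parabolic fit around the correlation peak (a simple stand-in, not the actual AS!2 algorithm):

```python
import numpy as np

def subpixel_shift(reference, image):
    """Estimate the (dy, dx) shift of `image` relative to `reference` using
    FFT cross-correlation plus a parabolic fit around the correlation peak.
    Assumes the shift is small and the peak is not at the image border."""
    cross = np.fft.ifft2(np.conj(np.fft.fft2(reference)) * np.fft.fft2(image)).real
    cross = np.fft.fftshift(cross)                  # zero shift now at the center
    peak_y, peak_x = np.unravel_index(np.argmax(cross), cross.shape)
    center_y, center_x = cross.shape[0] // 2, cross.shape[1] // 2

    def parabolic_offset(c_minus, c_zero, c_plus):
        # Vertex of the parabola through three neighbouring correlation values.
        denom = c_minus - 2 * c_zero + c_plus
        return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom

    dy = parabolic_offset(cross[peak_y - 1, peak_x], cross[peak_y, peak_x], cross[peak_y + 1, peak_x])
    dx = parabolic_offset(cross[peak_y, peak_x - 1], cross[peak_y, peak_x], cross[peak_y, peak_x + 1])
    return (peak_y - center_y) + dy, (peak_x - center_x) + dx

# Quick demo with a known fractional shift.
from scipy.ndimage import shift as nd_shift
rng = np.random.default_rng(1)
reference = rng.normal(size=(128, 128))
shifted = nd_shift(reference, (2.3, -1.7), order=3, mode='wrap')
print(subpixel_shift(reference, shifted))           # roughly (2.3, -1.7)
```

In a real stacking workflow such offsets are measured for many small alignment patches in every frame, which is also how local seeing distortions get compensated.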

Now let’s use AutoStakkert!2 to stack twenty of these images from a small sequence where the car is moving through the field, and sharpen the results a little bit.

We can read the license plate, and the brand of the car. The effective resolution has increased significantly by using Super Resolution techniques in AS!2 on multiple images of the same target.

All of a sudden we can read the license plate, and even the brand at the back of the car! We actually cleaned up the image and enhanced it. Super Resolution does indeed give you super resolution.

AutoStakkert!2 uses an advanced technique called drizzling – officially known as Variable Pixel Linear Reconstruction – which was originally developed for the Hubble Space Telescope to achieve sharper results from under-sampled images. Drizzling was applied to several tiny sections of the images to compensate for any image distortions. Combine this with an accurate estimate of the location of the features throughout the image, and you can end up with a lot more resolution than you started with, even when the field of view is changing.
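The real drizzle algorithm shrinks each input pixel into a smaller ‘drop’ and distributes its flux over the output grid with area weighting; the core idea, though, is just shift-and-add onto a finer grid. A deliberately naive sketch (not AS!2’s implementation), reusing sub-pixel offsets like the ones measured above:

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Toy super-resolution stack: deposit every pixel of each low-resolution
    frame onto a `scale`-times finer grid at its measured (dy, dx) offset and
    average. Proper drizzle additionally shrinks the pixels and weights by the
    overlapping area; this nearest-cell version only illustrates the principle."""
    h, w = frames[0].shape
    accumulator = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(accumulator)
    ys, xs = np.mgrid[0:h, 0:w]                     # low-res pixel coordinates

    for frame, (dy, dx) in zip(frames, shifts):
        # Undo the frame's offset, then map each pixel to the fine grid.
        fy = np.rint((ys - dy) * scale).astype(int) % (h * scale)
        fx = np.rint((xs - dx) * scale).astype(int) % (w * scale)
        np.add.at(accumulator, (fy, fx), frame)
        np.add.at(hits, (fy, fx), 1)

    hits[hits == 0] = 1                             # leave empty cells at zero
    return accumulator / hits
```

Because the car (or planet) drifts by random fractions of a pixel from frame to frame, the twenty frames together sample the scene on a grid finer than any single camera pixel, and that is where the extra resolution comes from.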

Apart from making Hubble images sharper, Super Resolution is also applied to telescopes that are actually peeking down at Earth. Some might find it interesting to see an enemy tank or structure that was hardly visible in a single image. More down-to-earth applications are to do exactly what we did here: read the license plates of speeding cars, or indeed recognize the bad guys in a video of a robbery. Unless the bad guys wore masks, of course.

To sum things up: Super Resolution is real. If you have just one image containing a few pixels, there is little you can do. But if you have a lot of slightly different and under-sampled versions of those pixels, then you can significantly increase the resolution of your images! For planetary astrophotography, however, this is hardly ever the case. Sometimes drizzling can give sharper results for low focal length recordings: when imaging the Sun in good seeing conditions at low magnifications, for example. For short exposures of deep-sky targets at lower focal lengths there is a much bigger chance it will actually increase the effective resolution. For most planetary recordings there simply is little to gain by drizzling.
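Whether a recording is under-sampled in the first place comes down to comparing the camera’s pixel pitch with the diffraction limit of the optics. A small check, using an assumed 2.9 µm pixel camera purely as an example:

```python
# A diffraction-limited telescope passes image detail up to a cut-off spatial
# frequency of 1 / (wavelength * focal_ratio); Nyquist sampling therefore asks
# for a pixel pitch of at most wavelength * focal_ratio / 2.
wavelength_um = 0.55          # green light
pixel_um = 2.9                # example pixel size, not a specific camera

critical_f_ratio = 2 * pixel_um / wavelength_um
print(f"Critically sampled at about f/{critical_f_ratio:.0f} or slower")   # ~f/11

# Shooting such a camera at, say, f/5 (a low focal length setup) under-samples
# by roughly a factor of two, which is exactly where drizzling has something to
# gain. At the long focal ratios used for planets (f/20 and beyond) the data is
# already over-sampled, and drizzling mostly just enlarges the noise.
```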

Left: original image. Center: 10 frames combined using the MAP-uHMT algorithm. Right: AutoStakkert!2.2.0.10 using 20 frames. Notice the higher amount of detail in the super resolution image made by AS!2.

AutoStakkert!2 does not use the MAP-uHMT method shown in the image above; that technique was developed by Dr. Feng Li at the University of New South Wales. AutoStakkert!2 only produces raw stacks, and to correct for residual image blurring these stacks were manually sharpened in Photoshop using the Smart Sharpen tool. Better results can likely be obtained by using more advanced deconvolution methods instead.
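As an illustration of what such a deconvolution step could look like (this is not part of the AS!2 or MAP-uHMT pipelines), here is a minimal Richardson-Lucy deconvolution of a raw stack, assuming a simple Gaussian guess for the blur kernel:

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=15, sigma=2.0):
    """A Gaussian stand-in for the real point spread function of the stack."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(stack, psf, iterations=30):
    """Minimal Richardson-Lucy deconvolution of a raw (linear) stack."""
    image = stack.astype(float)
    estimate = np.full(image.shape, image.mean())
    psf_mirror = psf[::-1, ::-1]
    eps = 1e-12                                     # guards against division by zero
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode='same')
        estimate *= fftconvolve(image / (blurred + eps), psf_mirror, mode='same')
    return estimate
```

The number of iterations and the PSF width are the knobs to play with: too few iterations leaves the stack soft, too many starts amplifying noise, just like oversharpening does.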