Jupiter in 2016

A couple of nights ago I was lucky enough to have some decent seeing conditions to image Jupiter. I almost missed it: it was rather windy – and freezing cold, weird for winter – and the seeing predictions weren't that good either, but because I saw other astrophotographers producing nice images, I decided to go outside anyway. I'm glad I did.

My first view through the eyepiece showed a relatively stable-looking Jupiter with plenty of contrast – much better than anything I had seen during the past months. If you only ever get poor seeing, you quickly forget what decent seeing looks like. Anyway, that one view through the eyepiece made me quickly reach for the electronic equipment, so I could start imaging for the next couple of hours. I was not going to waste this opportunity by just staring at Jupiter through an eyepiece. That would have been a huge mistake.

I had been without an equatorial platform for my Dobson for a few weeks, but I was just in time to get a new platform more or less in working condition. Half an hour into imaging I had to rush back inside to fix the manual declination controls I had added just hours before the recording session started – basically I had to apply a bit of thread-locking glue to make sure the bolt stayed fixed when I turned the declination control – but other than that I had everything I wanted: decent tracking capabilities and the ability to correct for my poor polar alignment.

Processing this image took about 5 hours all in all, as I wanted to combine as many images as I could using WinJUPOS. For this image I let WinJUPOS derotate each slightly sharpened stack I had selected to the same reference time, and then I manually recombined all of those again in Photoshop, taking only the best (parts of) each stack. I like this semi-manual approach, as it gives you lots of control – and involves lots of fiddling around with image processing software – but it does take forever. The seeing wasn't that great really; I only stacked about 30–40% of the frames of each recording. But the transparency was good, and being able to use this many stacks really helps to bring out the finer details and contrasts. I had recorded for several hours at a time, but in the end I combined only the best 4 red channels, and 2 green and 2 blue channels, which all happened to be imaged within about a 25-minute period of more stable conditions. The other recordings just weren't anywhere near as useful.
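
The Photoshop blending itself is manual work, but the underlying operation is conceptually just a per-pixel weighted average of co-registered stacks. Here is a minimal sketch in Python/numpy, assuming the derotated stacks have been exported as aligned grayscale arrays; the quality masks stand in for the hand-painted layer masks, this is not how WinJUPOS or Photoshop work internally:

```python
import numpy as np

def combine_stacks(stacks, weights):
    """Weighted per-pixel average of derotated, co-registered stacks.

    stacks:  list of 2D float arrays, all derotated to the same reference
             time (e.g. by WinJUPOS) and of identical shape.
    weights: one 2D array per stack, higher where that stack is sharper;
             painting these masks by hand is essentially what the manual
             layer blending described above amounts to.
    """
    num = np.zeros_like(stacks[0], dtype=np.float64)
    den = np.zeros_like(stacks[0], dtype=np.float64)
    for img, w in zip(stacks, weights):
        num += w * img
        den += w
    return num / np.maximum(den, 1e-12)  # avoid division by zero
```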

Anyway; by far my best Jupiter this season, and I’m quite pleased with the result.

Jupiter in SuperRGB, imaged March 14, 2016, around 00:06 UT.

Mars

Here is my latest result: Mars imaged on May 17 under pretty good conditions (well, for Dutch conditions of course). I still have some recordings left to process from the night before, and I’m really curious to see the changes in the results from one day to the next as the clouds on Mars can be pretty dynamic!

Mars in RGB, imaged May 17, 2014, at 22:04 UTC.

For this recording I used the ASI120MM monochrome camera (as usual) and Baader color filters. Processing the data took quite some time: out of 50 recordings I wanted to select only the most promising ones to process further using AutoStakkert!2. To this end, I first let AS!2 quickly batch-process all recordings using just a single alignment point, and then manually previewed and selected the resulting stacks. The best recordings were then processed more carefully using AS!2. This is an often-used 'trick' among planetary astrophotographers: not only do we perform lucky imaging within a recording, by letting AS!2 select and combine only the sharpest portions of frames, but also between recordings, where we basically select (and show!) only the best recordings out of hours of material.
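
AS!2's actual quality estimator isn't public, so the sketch below uses a common stand-in metric – the variance of the Laplacian, where higher means sharper – purely to illustrate the 'select between recordings' step:

```python
import numpy as np
from scipy import ndimage

def sharpness(img):
    """Variance of the Laplacian: a simple proxy for image sharpness."""
    return np.var(ndimage.laplace(img.astype(np.float64)))

def pick_best(stacks, keep=10):
    """Rank quick single-alignment-point stacks and keep the sharpest ones.

    stacks: dict mapping recording name -> 2D grayscale array.
    The returned recordings would then be reprocessed carefully with
    multiple alignment points.
    """
    ranked = sorted(stacks, key=lambda name: sharpness(stacks[name]),
                    reverse=True)
    return ranked[:keep]
```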

Mars rotates once in about 25 hours, but if you record each color channel one after another with a monochrome camera, there is a small displacement between the channels by the time you combine them into a color image. Because the recordings I made were quite lengthy, and especially because they were far apart in time, I used WinJUPOS to compensate for the planet's rotation by derotating the stacked images. Using this technique, it is even possible to combine multiple recordings to increase the signal-to-noise ratio (produce prettier images), or to get rid of some sharpening or diffraction-related artifacts in high-contrast areas.
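
To get a feel for why this matters, here is a back-of-the-envelope calculation; the 20-minute span and 100-pixel disk radius are example values, not the actual recording parameters:

```python
import math

ROTATION_PERIOD_H = 24.62          # Mars sidereal rotation period in hours
span_min = 20                      # example: time between first and last channel
disk_radius_px = 100               # example: Mars disk radius on the sensor

rotation_deg = 360.0 / (ROTATION_PERIOD_H * 60) * span_min   # ~4.9 degrees
drift_px = disk_radius_px * math.radians(rotation_deg)       # near disk center

print(f"rotation: {rotation_deg:.1f} deg, feature drift: ~{drift_px:.0f} px")
# ~4.9 degrees and ~9 pixels of drift: far too much to simply overlay the
# channels, hence the WinJUPOS derotation step.
```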

Anyway, lots of cool stuff can be seen in this image of Mars. The little dark dot on the center left, for example, is a large shield volcano, Ascraeus Mons, peeking through some water ice clouds. Also clearly visible is the north polar cap, which during summer consists mostly of water ice.

Jupiter

Finally some good seeing!

The image was made with a 16″ F/5 Dobson, Baader RGB filters, and an ASI120MM camera operating at about F/15. Of course the image was processed in AutoStakkert!2; post-processing was done in Photoshop and WinJUPOS, the latter to derotate the stacks.
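
For the curious, the sampling those numbers imply can be checked quickly. This is just a sanity check, assuming the ASI120MM's 3.75 µm pixels and green light:

```python
import math

APERTURE_MM = 406.4        # 16 inch mirror
F_RATIO = 15.0             # effective focal ratio with the Barlow
PIXEL_UM = 3.75            # ASI120MM pixel size
WAVELENGTH_M = 550e-9      # green light

focal_mm = APERTURE_MM * F_RATIO                           # ~6096 mm
scale_arcsec = 206.265 * PIXEL_UM / focal_mm               # arcsec per pixel
rayleigh_rad = 1.22 * WAVELENGTH_M / (APERTURE_MM / 1000)  # diffraction limit
rayleigh_arcsec = math.degrees(rayleigh_rad) * 3600

print(f"image scale: {scale_arcsec:.3f} arcsec/px")        # ~0.127"/px
print(f"diffraction limit: {rayleigh_arcsec:.2f} arcsec")  # ~0.34"
# roughly 2.7 pixels per resolution element: safely past Nyquist sampling
```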

Jupiter

The Orion Nebula 1/2

Occasionally, I'm going to use this blog to show some of the images I made. When I do, I will try to tell just a little bit about how the image was made, and what it actually is that you are looking at. Today I will discuss an image of M42 that I made over a year ago, on the night of November 18–19, 2012. I planned to post this story soon after the images were taken, but for some reason that never happened… Anyway, here is the first part of the rather long story, which will probably be completed a year from now…

I did not want to go outside, because I had to get up early the next morning and was already tired. But astrophotographers are weird people, especially when they live in areas where the number of good clear skies per year is astronomically (ha!) low. Whenever there is an opportunity to observe the night sky, you take it, even if you don’t really want to. Because before you know it another month has passed without any clear skies, and you would just feel bad about not imaging when you could have.

So I went outside. It was cold. It was also slightly foggy, which I did not like, because it meant the air was probably steady. And that in turn meant the images would probably be good, and that I was not going to get much sleep that night. So I set up my telescope, connected the remote controller of the focuser, turned on the fan at the back of the telescope tube to speed up temperature equilibrium, checked and corrected the alignment of the optics, and pointed the telescope at Jupiter. The moons of Jupiter looked very steady: the seeing was good. Tomorrow I was going to be tired.

I pretty much never look through the telescope myself; I let a camera do that for me. This way I can see many more details, and share the image with other people as well. So I powered up my laptop, added the filter wheel, Barlow, and camera, turned off the fan again because it can cause slight vibrations, and then started imaging Jupiter for the next three hours. The Orion Nebula was still too low above the horizon, so it did not make much sense to start imaging that, but I generally prefer imaging planets anyway, as they are more dynamic. The planets also require more magnification, which means you can only see them in high detail when the seeing is very good. The seeing was very good, so to me it was obvious I should image a planet, and the biggest one available that night was Jupiter.

But as this post is actually about the Orion Nebula, let's fast-forward to around 2 AM. It was still cold: there was a layer of ice on my telescope, I had to defog the secondary mirror a couple of times with a hair dryer, and my fingers were freezing. The seeing was slowly getting worse as well, so I was pretty much done with Jupiter. And then I noticed Orion, and in particular the fuzzy spot at the center of the three stars making up the sword of the Hunter, close to the larger structure of the three stars making up its belt. That is where we can find the Orion Nebula.

As you probably know, everything in space is huge. Even Jupiter – whose light 'only' takes about 40 minutes to get here – is already enormous: Earth easily fits inside the centuries-old storm on Jupiter – the Great Red Spot – and we could place more than fifty thousand Earths on a straight line from here to Jupiter. Light takes about 24 years (!) to cross the Orion Nebula from one side to the other, which means that if we lined up Earth-sized planets all the way across it, there would be room for almost 18 billion of them – more than two for every person on Earth. The Nebula itself is relatively close by: light takes only about 1,200 years to get from there to here, while the Andromeda Galaxy – our nearest large neighboring galaxy – is roughly two thousand times further away. Anyway, you get the picture: there is plenty of space in space.
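
If you want to check those numbers yourself (the population figure is a rough value for when this was written, and the distances are approximate):

```python
C_KM_S = 299_792           # speed of light in km/s
EARTH_DIAM_KM = 12_742
LY_KM = 9.4607e12          # kilometers in one light year
POPULATION = 7.1e9         # rough world population in 2013

jupiter_km = C_KM_S * 40 * 60                 # 40 light-minutes in km
print(jupiter_km / EARTH_DIAM_KM)             # ~56,000 Earths side by side

nebula_km = 24 * LY_KM                        # Orion Nebula diameter
earths = nebula_km / EARTH_DIAM_KM
print(earths, earths / POPULATION)            # ~1.8e10 Earths, ~2.5 per person
```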

But when we zoom in a little bit on just the center of the Orion Nebula, this is what we see:

The bright center of the Orion Nebula. The brightest stars seem like just one star when viewed with the unaided eye.

This image was made during the night in question with my 0.25 m Newtonian telescope. Of course, it does not come close to what Hubble can see when staring at M42. But for Hubble it is relatively easy: it has a huge 2.4 meter mirror that can collect light about 92 times faster (!) than my telescope can, and it can resolve details that are about 10 times smaller. Hubble also floats in space, which means it does not have to worry about the Earth's atmosphere, which has a tendency to distort images, especially when trying to view really tiny details, and even more so when using long exposure times. Because the longer you expose an image, the more chance our atmosphere has to distort it.
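
Those ratios follow directly from the mirror sizes (0.25 m is my mirror, 2.4 m is Hubble's):

```python
print((2.4 / 0.25) ** 2)   # ~92: light gathering scales with mirror area
print(2.4 / 0.25)          # ~10: resolving power scales with mirror diameter
```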

Unfortunately, this is where the story ends for now. Keep checking this blog for a follow up! For now I’ll end with two close-ups of the image posted above. You can actually see protoplanetary disks here: disks of dense gas surrounding stars that have basically just been formed!

Proplyds, or externally illuminated photo-evaporating protoplanetary disks, in the core of the Trapezium (the slightly elongated star dots right next to the brightest star in the center). These are young stars that have only just been born.
Another proplyd in the center of the field of view.

Enhance!

Every now and then there is a dramatic TV show on in which the good guys are 'digitally enhancing' a few pixels to create an ultra-sharp image of the face of a villain. The bad guy is recognized, he is caught, and the world is safe again. Of course we all know this is impossible: there were only a few pixels to work with, so it can't be done.

Right?

Single frame of a car with an unreadable license plate and brand name

Well, it is indeed impossible if you only have those few pixels to work with. But if you have more than one image of the same target – either from slightly different viewpoints, or taken over time with a slight offset between the images – it turns out it actually is possible. It is only possible, though, if the images are under-sampled and you can determine the offset of the target with sub-pixel accuracy.

If the images are not under-sampled – as is the case for most high resolution astrophotography images – the first frame would contain the same information as any of the other frames, apart from a little bit of noise of course. Stacking the images would certainly increase the signal-to-noise ratio, and you can even reject frames that are too blurry or compensate for movement a bit, but you will never be able to go beyond the diffraction limit of the optics. You will need a larger aperture to get more detail.

And if you can't determine the offset of the target in the image, then even if the image did contain extra information, we wouldn't know where to place it! Luckily, accurately determining offsets in images is no problem at all for AutoStakkert!2. If you do have under-sampled images to play around with, the fun can begin. I found an interesting dataset: a tiny video containing several frames of a moving car. One of these images is shown above, enlarged to 300% to show the individual pixels it is made up of.
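
How AS!2 measures those offsets internally isn't spelled out here, but a standard way to get sub-pixel shifts is phase correlation; scikit-image ships an implementation, so a sketch of the idea could look like this:

```python
from skimage.registration import phase_cross_correlation

def subpixel_offset(reference, frame, precision=100):
    """Estimate the (row, col) shift of `frame` relative to `reference`
    to 1/precision of a pixel using phase correlation."""
    shift, error, _ = phase_cross_correlation(
        reference, frame, upsample_factor=precision)
    return shift  # e.g. array([ 0.37, -1.52 ])
```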

Now let’s use AutoStakkert!2 to stack twenty of these images from a small sequence where the car is moving through the field, and sharpen the results a little bit.

We can read the license plate, and the brand of the car. The effective resolution has increased significantly by using Super Resolution techniques in AS!2 on multiple images of the same target.

All of a sudden we can read the license plate, and even the brand at the back of the car! We actually cleaned up the image and enhanced it. Super Resolution does indeed give you super resolution.

AutoStakkert!2 uses an advanced technique called drizzling – officially known as Variable-Pixel Linear Reconstruction – which was originally developed for the Hubble Space Telescope to achieve sharper results from under-sampled images. Drizzling is applied to several tiny sections of the images to compensate for any image distortions. Combine this with an accurate estimation of the location of the features throughout the image, and you can end up with a lot more resolution than you started with, even when the field of view is changing.
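
Real drizzle shrinks each input pixel to a 'drop' and distributes its flux over a finer output grid with proper weight maps; the bare-bones version below only does a nearest-neighbor shift-and-add onto a 2× grid, which is enough to show where the extra resolution comes from. The sub-pixel shifts are assumed to be known, for example from phase correlation as sketched earlier:

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Naive super resolution: place each frame onto a `scale`x finer grid
    at its measured sub-pixel offset and average the overlapping samples.

    frames: list of 2D arrays (under-sampled, identical shape)
    shifts: list of (dy, dx) sub-pixel offsets of each frame vs. frame 0
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for img, (dy, dx) in zip(frames, shifts):
        # target coordinates on the fine grid, rounded to the nearest cell
        ty = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        tx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (ty, tx), img)
        np.add.at(cnt, (ty, tx), 1)
    return acc / np.maximum(cnt, 1)
```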

Apart from making Hubble images sharper, Super Resolution is also applied by telescopes peeking down at Earth: some might find it interesting to see an enemy tank or structure that was hardly visible in a single image. More down-to-earth applications are to do exactly what we did here: read the license plates of speeding cars, or indeed recognize the bad guys in a video of a robbery. Unless the bad guys wore masks, of course.

To sum things up: Super Resolution is real. If you have just one image containing a few pixels, there is little you can do. But if you have a lot of slightly different and under-sampled versions of those pixels, you can significantly increase the resolution of your images! For planetary astrophotography, however, this is hardly ever the case. Sometimes drizzling can give sharper results for low focal length recordings – when imaging the Sun in good seeing conditions at low magnifications, for example – and for short exposures of deep-sky targets at lower focal lengths there is a much bigger chance it will actually increase the effective resolution. But for most planetary recordings there simply is little to gain by drizzling.

Left: original image. Center: 10 frames combined using the MAP-uHMT algorithm. Right: AutoStakkert!2.2.0.10 using 20 frames. Notice the higher amount of detail in the super resolution image made by AS!2.

AutoStakkert!2 does not use the MAP-uHMT method shown in the image above; that technique was developed by Dr. Feng Li at the University of New South Wales. AutoStakkert!2 only produces raw stacks, and to correct for residual image blurring these stacks were manually sharpened in Photoshop using the smart sharpen tool. Better results can likely be obtained by using more advanced deconvolution methods to get rid of the residual image blurring.