Jeremy Likness, Astrophotographer
Jeremy Likness has always been interested in the mysteries of deep space, but only recently began to explore them through astrophotography.
📅 February 20, 2022 💬 4643 words ⌛ 18 minutes to read

To the Moon! Processing lunar images

To date, I've taken perhaps thousands of images of the moon. Some are dramatic, like the cloudy moon; some are detailed, like this crescent moon; and others are extreme close-ups of the surface.


I take pictures with my phone, mirrorless camera, Stellina observation station, and a doublet refractor. In all cases, I follow a similar set of steps that lead to the final image. At a high level, these steps are:

  1. Acquisition - the set-up and capture of the images
  2. Pre-processing - organizing and filtering the images
  3. Stacking - accumulation of multiple images to increase the detail and signal
  4. Post-processing - tweaks to finish the composition

Most of my workflows involve 100% free software.

💡 TIP Although this post is focused on the moon, the same concepts apply to most planetary photography as well.

Also, as an Amazon affiliate I earn from qualifying purchases made from links in this post. The money helps me fund this website and production. I only link to products I've personally purchased and used.

OK, let's break down the approaches.

Acquisition

Acquisition is the planning stage and involves several steps. Questions to ask include:

  1. When will the moon be visible?
  2. What phase will it be in?
  3. Is it in conjunction with any planets?
  4. What composition am I going for?

Let's answer these questions with some free software.

Stellarium

Stellarium is one of my most commonly used pieces of astrophotography software. The reason is simple: it allows me to look at the stars even when it's cloudy outside! It has a lot of great features built in. For example, you can choose to view the sky as it will be at a future date. Here, I can see the moon will be fairly high in the sky at 3am tomorrow morning.


What I really love is the ability to enter information about your equipment. For example, this is my set of entries for my Svbony doublet refractor with a ZWO ASI294MC Pro camera.



This configuration then lets me check "framing," or how the image will appear. This is the moon tomorrow morning as framed by different pieces of equipment I have.




As you can see, I have a few options!

Barlow lenses

It's worth mentioning a quick note about Barlow lenses. These are special lenses you can add to your setup to increase magnification. You'll often hear people say you lose a lot of light, but modern lenses are highly efficient. Each Barlow comes with an optimal magnification; mine is 2x. What some people don't realize is that, independent of focus, the magnification is a function of how far the sensor (or your eye) sits from the lens. For example, the extreme close-up shots I took of the moon were with a 2x Barlow. I intentionally used spacers to distance my camera sensor from the Barlow lens so I could get extra magnification. My best guess is that I achieved about 5x.
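If you want to estimate the effect of spacers yourself, a rough thin-lens approximation says the effective magnification grows linearly with the Barlow-to-sensor distance. Here is a minimal sketch (the 100 mm Barlow focal length is an assumed, illustrative value, not a measured one):

```python
# Rough thin-lens estimate of effective Barlow magnification:
#   M ≈ 1 + d / f_barlow
# where d is the Barlow-to-sensor distance and f_barlow is the
# Barlow's focal length. All values below are illustrative.

def barlow_magnification(sensor_distance_mm: float, barlow_focal_mm: float) -> float:
    """Effective magnification with the sensor at the given distance."""
    return 1 + sensor_distance_mm / barlow_focal_mm

# A hypothetical 2x Barlow with a 100 mm focal length hits its nominal
# power at ~100 mm of spacing; adding spacers pushes it higher.
for d in (100, 200, 300, 400):
    print(f"{d} mm spacing -> {barlow_magnification(d, 100):.1f}x")
```

With enough spacing, a nominal 2x Barlow can plausibly reach the roughly 5x I estimated above.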

Telescopius

Another great tool is the Telescopius website. Here you can search "moon" and find a chart of phases as well as a plot of its altitude and direction by time of day. The tool is good, but I prefer the site for deep sky objects (DSOs) more than for planetary targets.

It also allows you to enter lens and sensor information to generate previews.

Astrospheric

Of course, knowing where the moon will be is only half the battle. The other half is anticipating the weather, and for that you want more precision than your average phone weather app provides. I've spent hours setting up a rig under cloudy skies just because the forecast showed a clear window of a few hours to image! I can do that because I use a free service called Astrospheric that gives you everything by the hour:

  1. Cloud level
  2. Seeing conditions
  3. Transparency
  4. Sunrise/sunset
  5. Moonrise/moonset and phase
  6. ISS flyby schedule
  7. Dew point (not just to stay dry, but helps predict fog)

Here's an example forecast.


I marked in red the spot that immediately caught my eye. I was almost ready to shout out loud in joy when my eyes scanned down to the area I circled in yellow. Just as I'm dismissing the idea out of simple survival instinct, that other voice says, "Wow, your new telescope should be here Wednesday and, gee, we haven't had the chance to really test the rig in 14-degree weather!" Oh, and hey, I just noticed the dew point and it's going to be a dry and fog-free night, so… yeah, it's an addictive hobby.

Phases

Just a quick note on phases: the phase of the moon can dramatically alter the type of image you are able to capture. A crescent moon means the angle of the sun casts long shadows from the ridges of craters and mountains, so the detail is in high contrast. A full moon, however, receives direct sunlight and casts fewer shadows, so the contrast you capture comes more from differences in surface minerals than from terrain. I'm planning a project to assemble a full moon from images of the individual phases to compare against a direct full-moon shot.

Monochrome or one-shot color

There is a lot of debate about what's best for imaging when it comes to camera sensors. Personally, I've seen great photos from both, so for me there is no debate. You can grab a one-shot color camera with good resolution, pixel size, and speed and capture stunning images. However, it is true that no camera sensor sees color. In a one-shot color camera, a mosaic of tiny color filters covers the individual photosites, so each one measures the intensity of just one color channel. A monochrome camera uses more of the sensor because every photosite contributes to every exposure, with a single filter placed over the whole thing. This requires swapping out filters or using a filter wheel, potentially refocusing for each filter, and is therefore a more complicated approach.
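To make that trade-off concrete, here is a minimal sketch of how a raw one-shot color frame splits into channels. It assumes an RGGB Bayer layout (the actual pattern varies by camera) and a synthetic frame roughly the size of my ASI294's sensor:

```python
import numpy as np

# Synthetic stand-in for a raw 14-bit mosaic from an RGGB Bayer sensor.
raw = np.random.randint(0, 4096, size=(2822, 4144), dtype=np.uint16)

# Each 2x2 cell holds R, G, G, B: red and blue each use 1/4 of the
# photosites and green uses 1/2. A monochrome sensor would use all of them.
red   = raw[0::2, 0::2]
green = (raw[0::2, 1::2].astype(np.uint32) + raw[1::2, 0::2]) // 2
blue  = raw[1::2, 1::2]

print(red.shape, green.shape, blue.shape)  # each a quarter-resolution plane
```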

I'm still waiting to test-drive my multi-filter setup with a monochrome camera. My other images are either in color or taken with Stellina, which images the moon in grayscale. It may be beneficial to use a filter in light-polluted areas, because good filters will block out some of the wavelengths known to make up light pollution. I'll let you know how it goes when I have an evening or morning I can both see and survive.

Focal length and magnification

The framing step is important because it helps visualize the appropriate focal length and sensor size for the picture you want. The typical camera range of 50mm-200mm is great for landscape shots that feature the moon, while 400mm gets closer to drawing out detail in a close-up of the disk. 1000mm and longer is the territory of surface detail: targeting specific craters, ridges, and even landing sites. You can even piggyback equipment and take shots at different focal lengths to find what works best.
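If you'd rather estimate framing with arithmetic than trial and error, the field of view follows directly from focal length and sensor size. A minimal sketch (the 15.6 mm sensor height is an illustrative APS-C value):

```python
import math

def field_of_view_deg(sensor_mm: float, focal_length_mm: float) -> float:
    """Angular field of view along one sensor dimension, in degrees."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm)))

# The moon's disk spans roughly 0.5 degrees. Illustrative APS-C height: 15.6 mm.
for focal in (50, 200, 400, 1000, 2000):
    fov = field_of_view_deg(15.6, focal)
    print(f"{focal:>5} mm: {fov:5.2f} deg -> moon fills ~{0.5 / fov:.0%} of frame height")
```

At 50mm the moon occupies only about 3% of the frame height (a landscape accent); past 1000mm it fills half the frame or more, which is why that range is where the surface detail lives.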

Exposure and gain

When you look at the moon or a planet, you are observing reflected sunlight. The moon can be extremely bright and defies traditional nighttime settings. When I target the moon, I typically set my ISO/gain to 0, back off the stops to partially close the aperture and let less light in, then expose at millisecond or microsecond intervals. Most software comes with a live preview mode, and some even have algorithms to automatically adjust exposure and gain to take the ideal photograph.
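As a rough illustration of what those auto-exposure algorithms do (a generic sketch, not any particular vendor's implementation), the idea is to nudge the exposure until the brightest pixels sit near, but below, saturation:

```python
import numpy as np

def capture_frame(exposure_ms: float) -> np.ndarray:
    """Stand-in for a real camera API call; returns a simulated 8-bit frame."""
    rng = np.random.default_rng(0)
    return np.clip(rng.normal(exposure_ms * 40, 10, size=(480, 640)), 0, 255)

# Nudge exposure until the 99.9th-percentile pixel is bright but not clipped.
exposure_ms = 1.0
for _ in range(20):
    frame = capture_frame(exposure_ms)
    peak = np.percentile(frame, 99.9)
    if 200 <= peak <= 250:              # well exposed for 8-bit data
        break
    exposure_ms *= 0.7 if peak > 250 else 1.3

print(f"settled on ~{exposure_ms:.2f} ms")
```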

Lucky astrophotography

I learned about lucky astrophotography after I wrote my first articles on planetary imaging. The telescope I was using at the time was already doing it. Conceptually, it's easy to think of filming videos and taking pictures as two entirely different processes, but technically, they are the same. A video is just a rapid accumulation of photos taken one after the other. Using video mode can be easier, however, because you can worry about framing, lighting, and focus and tweak these settings in real time while your camera worries about taking pictures.

Lucky astrophotography refers to taking a lot of captures, usually with video, then processing only the best to get your "lucky shot." More captures means a higher likelihood of quality photos in the batch, and more exposures means more signal and less noise. I can easily capture thousands of frames in just a few minutes of filming a target like the moon.

Don't worry: these processes work equally well on individual photos as they do on video.
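At its core, lucky imaging is just a quality metric plus a sort. Here is a minimal sketch using variance of the Laplacian as a stand-in for the sharpness metrics real stacking tools use (assumes OpenCV and a hypothetical frames/ folder of exported images):

```python
import glob
import cv2

def sharpness(path: str) -> float:
    """Variance of the Laplacian: higher means crisper detail in the frame."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(img, cv2.CV_64F).var()

# Score every frame, then keep only the sharpest 25% for stacking.
frames = sorted(glob.glob("frames/*.tif"), key=sharpness, reverse=True)
best = frames[: max(1, len(frames) // 4)]
print(f"keeping {len(best)} of {len(frames)} frames")
```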

💡 TIP Some software will save captures in a special astrophotography format called SER, or the "Lucam video format." This format stores metadata about each frame in the file that is useful to image-processing programs. You can play these files back by installing a free SER player.
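If you're curious what's inside, a SER file starts with a fixed 178-byte header followed by the raw frames. A minimal sketch of reading the header fields (offsets per the published SER specification; the file name is illustrative and error handling is omitted):

```python
import struct

# A SER file begins with a fixed 178-byte header, then raw frame data.
with open("capture.ser", "rb") as f:
    header = f.read(178)

file_id = header[0:14].decode("ascii", "replace")   # typically "LUCAM-RECORDER"
(_, color_id, little_endian,
 width, height, bit_depth, frame_count) = struct.unpack_from("<7i", header, 14)
observer  = header[42:82].decode("latin-1").rstrip("\x00 ")
telescope = header[122:162].decode("latin-1").rstrip("\x00 ")

print(f"{frame_count} frames of {width}x{height} at {bit_depth} bits"
      f" ({observer} / {telescope})")
```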

So, how do I capture my images? There are four pieces of software, all free, that I may use.

Imaging Edge Desktop

Imaging Edge Desktop is free software specifically designed for owners of Sony mirrorless cameras. There are three main parts to the software. The remote application provides full remote control of your camera, including all the settings and modes needed to shoot your targets. What's really great is the live view, which is zoomable so that you can easily frame and focus on your targets. The viewer not only makes it easy to review your photographs, but also has some built-in utilities, such as creating a time-lapse movie from a set of exposures. The editor is unique to the camera, with capabilities like turning on dynamic range settings, managing lens distortion, correcting for over- and under-exposure, and more. Although I occasionally take photographs, the most common approach I take to imaging the moon is to record video.

💡 TIP If you plan to create a time-lapse movie, one technique you can use is to open a single frame in the editor and tweak settings like saturation and contrast so that the image is clearer. You can save the settings to an XML document. In the viewer, when you choose the time-lapse option you can point to that settings document so each frame is adjusted before being inserted into the movie. This can produce some amazing results.

ASICap

ASI Studio is another application specific to a line of cameras: it is available for ZWO's line of ASI cameras. There are several applications in the suite, including a great (and fast) FITS file viewer, but the main one I use for imaging the moon is called ASICap. Not only does the software easily connect to the cameras and automatically set exposure and gain, it also connects to my electronic filter wheel. This makes it easy to plan color sessions, because I can set up a sequence that moves through each filter as part of the capture. I can set the output to individual images, an AVI, or an SER file. This is my preferred software for lunar and planetary capture.

SharpCap

SharpCap is a versatile application specialized for astrophotography. I like to think of it as astrophotography algorithms all rolled into a single program. I use SharpCap to ensure I get the best focus possible for my images, and also for polar alignment. It will also analyze your camera sensor and help you determine the best exposure and gain for an observation session. These are all pro features, but after using the software for just a few days I knew it was worth investing in the pro version. It connects to most cameras, has an automatic exposure and gain feature, and gives you full control over the settings you wish to capture. It will even connect to and control your mount. Although I don't use it for capture, I know many photographers who do everything they need from within this app.

N.I.N.A.

Nighttime Imaging 'N' Astronomy (N.I.N.A.) is a free and open-source application designed to handle everything you need for an imaging session. I use it exclusively when capturing deep sky objects. It works with various cameras, mounts, filter wheels, and even automated dome systems. Some of the really powerful features include being able to pull a target from Stellarium, then take a picture with the camera and figure out the angle of rotation so that you can perfectly frame and center your target for imaging. It has an advanced scripting feature for automating your image captures, including changing filters and exposures, slewing your telescope to the target and ensuring it is properly centered, and handling things like flats and darks.

Because I don't need precise framing for planetary and lunar targets, I prefer to use ASICap for those sessions, but I use N.I.N.A. for everything else.

Pre-processing

I've captured my moon images and have three files to deal with:

  1. A 41-second .mp4 movie of the moon from my Sony Alpha 6300
  2. 101 grayscale .jpg images of a crescent moon from Stellina
  3. An extremely turbulent close-up video of the moon's surface from my Svbony refractor with an ASI camera. This is in the SER file format and takes up about 3 gigabytes of disk space.

This video shows some of the raw footage. For this blog post, I'll focus on the first part of the video that pans over the Copernicus crater.

Here is a screenshot of several Stellina images. Notice how the moon is in different positions within the frame. Pre-processing will address that.


There are two ways I pre-process images. Before turning to the tool created specifically for the job, let's look at the manual approach.

ā€œBlinkā€ is the name of a process in the not free PixInsight application that I use for most of my post-processing. You can ā€œblinkā€ without it. The manual way to approach pre-processing is to scan your images and toss the ones that are cropped, blurry, deformed or just not as crisp. For the remaining images, you might want to scale and crop them so they are all roughly the same size and in the same area of the frame. There are ways to automate this, but I prefer to use a piece of software designed specifically for the job: PIPP.

Planetary Imaging Pre-processor

My main "go to" for planetary and lunar pre-processing is the venerable and free Planetary Imaging PreProcessor (PIPP). It can take either videos (including the SER format) or individual images. You specify the type of image you're after (such as the full planet, a close-up of its surface, or an animation) and tweak some other controls, and PIPP goes to work prepping your images. Let's take a look using our three sources.

Full moon: camera video

After launching PIPP, I simply drag my video onto the app and it immediately pops up an "output frame." This is a preview of what to expect. You can see info about the file as well. Notice how small and off-center the image of the moon is.


The first tab has a dialog "optimise options for…" and I choose "Solar/Lunar Full Disc." This will change some defaults for me. I leave the "Input Options" as is and move on to "Processing Options." Here there is a bit of work to do. Because I filmed in color, I uncheck "Convert Colour to Monochrome." Under "Object detection" I click "Test Detect Threshold" and make sure the moon, and only the moon, is highlighted in red.


If the moon is not completely red, or if the red bleeds into the area around the moon, I'll uncheck "Auto Object Detection Threshold" and tweak the value until it detects properly.

Under cropping, I tweak the sizes and click the "Test Options" button until I like what I see. Here I've settled on 800 pixels by 800 pixels. Notice how the moon is now large and centered.


On the "Quality Options" tab I usually check "Only Keep the Best Quality Frames," and for this I'll set it to 500 (there are over 2,000 in the source file). I skip the "Animation" tab and go to "Output Options." I prefer to have a single file, so I usually output as an AVI unless I know my stacking software only accepts images. We're going to stack this with AutoStakkert!, so an AVI is fine.

Now all that's left is to go to the "Do Processing" tab and click "Start Processing." Less than five minutes later, I have an AVI that is about 16 seconds long (compared to the original 41) with the moon large and centered. We'll stack this in a minute, but first let's pre-process our other files.

Full moon: Stellina captures

For the Stellina captures, the process is similar with a few changes. Instead of dragging a movie file onto the PIPP app, I drag the individual image files or use the dialog in the application. Note that the app is old and doesn't recognize the .jpeg extension; a quick rename to .jpg makes it right as rain. I have fewer frames, so on the "Quality Options" tab I make sure that "Only Keep the Best Quality Frames" is not checked. For "Output Options," I'll be stacking with software that doesn't recognize movie formats, so I choose TIFF as the output file format.

I click "Start Processing" and in just a few seconds have a directory neatly populated with cropped and centered crescent moons.

Terrain: Copernicus crater

Terrain is a little more involved, especially because my video didn't stay on the target area but panned across the moon's surface. On the main tab, I check "Solar/Lunar Close-Up" for my settings. The next thing I do is choose "Limit frame range" on "Input Options" and tweak the start frame until I can see the crater. On the "Processing Options" tab, "Anchor Feature Box" is already checked along with "Reject frames without Anchor feature." When I click "Test Options" a red box appears that I can place over the crater for tracking. I picked a few surrounding craters, too.


I leave the output option as an AVI. I click "Start Processing" and a few minutes later have a video showing all of the frames that entirely contain my anchor feature box. It turns out to be 327 frames.

Stacking

Now our moon images are pre-processed and ready to stack. For the observations that captured the entire moon, stacking raises signal and reduces noise to provide a crisp, sharp image to finish processing with. For the terrain video, stacking averages the points over multiple frames to reverse the wobbly effects of vibrations and turbulence and provide a clean final image. I usually use AutoStakkert! but will resort to the other methods on the rare occasion I get a bad result from the stack.
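Conceptually, the stack itself is simple: once the frames are aligned, averaging them cancels random noise while the real signal stays put. A minimal sketch with synthetic data (alignment assumed already done, as in the PIPP output):

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.random((256, 256)) * 100       # stand-in for the true lunar detail

# 100 aligned frames: the same signal plus independent random noise per frame.
frames = [signal + rng.normal(0, 25, signal.shape) for _ in range(100)]
stacked = np.mean(frames, axis=0)

# Averaging cancels noise roughly by the square root of the frame count.
print(f"single-frame noise: {np.std(frames[0] - signal):.1f}")   # ~25
print(f"stacked noise:      {np.std(stacked - signal):.1f}")     # ~25/sqrt(100) = 2.5
```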

Autostakkert!

Another older but effective (and free) tool, AutoStakkert! is optimized to stack planetary and lunar images based on surface features, the same way other software uses stars to align deep sky objects. The UI may seem a little outdated and confusing, but the app literally walks you through the process step by step.

First, we open the source. For this example, I choose the AVI file that was processed from my camera video. A new window pops up with a single frame. The purpose of this step is to draw boxes around features that will help align the various frames together for stacking.


You can click on areas to draw boxes, but it's easier to simply click the radio button next to the number of points desired (I'll pick 200) and then click "Place AP grid." This automatically generates the alignment points, and the coverage looks good.


In the original window, you can now click "Analyse" to get a quality graph. I already trimmed the bad images during pre-processing, so the graph looks good. I'll stack 75% of the remaining images and normalize the stack. Because the image is so small, I'm going to beef it up using a 3x drizzle. This will work well because I used video that moved around enough to shift or "dither" the pixels. Here is my dialog just before clicking "Stack."


After stacking, a new file is generated that is ready for processing. Here is a comparison with a single frame from the camera on top and the stacked version on the bottom.


Can you see a difference in the clarity and sharpness of detail?

PixInsight

PixInsight is paid software that is, in my opinion, the ultimate photo-processing tool. I'm a huge fan and use it daily. I've even built and contributed a set of DeepSkyWorkflow scripts that help me fix out-of-whack colors, generate support files for deconvolution, fix distorted stars, and even reduce noise. Sometimes the stacking tools I use don't work, and when that happens I usually switch over to a specific PixInsight script called "Fast Fourier Transformation Registration," or "FFTRegistration," on the Script => Utilities menu.

PIPP sorts images from highest to lowest quality, so I can pick the first image it output as the reference image. I also check "correct for rotation" but leave the other options on their defaults. I tap "Add" and select all of the output moon files, including the one I used as the reference frame. Clicking OK kicks off a registration process that takes about 5-10 minutes on my machine and generates the stacked crescent moon.
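The idea behind Fourier-based registration is neat enough to sketch. Phase correlation finds the translation between two frames as a sharp peak in the inverse FFT of the normalized cross-power spectrum (a rough illustration of the concept, not PixInsight's actual implementation):

```python
import numpy as np

def phase_correlation_shift(ref: np.ndarray, img: np.ndarray) -> tuple[int, int]:
    """Return the (dy, dx) to apply to img (via np.roll) to align it with ref."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9))   # normalized cross-power spectrum
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    h, w = ref.shape
    # Peaks past the midpoint wrap around to negative shifts.
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

# Quick check with a synthetic shift of (10, -7) pixels:
rng = np.random.default_rng(2)
ref = rng.random((256, 256))
img = np.roll(ref, (10, -7), axis=(0, 1))
print(phase_correlation_shift(ref, img))   # (-10, 7): np.roll(img, (-10, 7)) realigns it
```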

RegiStax

AutoStakkert! can handle terrain just fine, but there's another free application that many users swear by, so I'm sharing it here. It's been around a while (check out the RegiStax 6 website). The interface is very similar to AutoStakkert!'s. I'll go ahead and drag the moon onto the app. Here, the steps aren't as intuitive, but next I'll tap "Set Alignpoints." The result missed a few craters, so I add them by clicking on points in the photograph.


When I tap "Align," it appears to do nothing. If you look at the bottom left of the dialog, however, you should see a bar with percentages that tracks the progress. Once the bar hits 100%, click "Limit" and then "Stack."

Multi-channel processing

The workflow so far makes sense for either one-shot color or grayscale. But what about the option to use a monochrome camera with filters? How do you stack and combine those? For that, I made a video. Check it out:

Post-processing

Now you've got a stacked image. What's next? The main thing to do is sharpen the image and remove noise. This is possible through a variety of software offerings, from the free GIMP and FastStone to the paid PixInsight. I share one way to do this in my processing video.
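As a taste of what "sharpen" means here, a classic unsharp mask adds back the difference between the image and a blurred copy of itself. A minimal sketch (one common approach; the tools above offer far more sophisticated wavelet and deconvolution methods):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image: np.ndarray, radius: float = 2.0, amount: float = 1.5) -> np.ndarray:
    """Sharpen by boosting the detail the blur removed."""
    blurred = gaussian_filter(image, sigma=radius)
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)

# Usage: a stacked image normalized to [0, 1]; tweak radius and amount to taste.
stacked = np.random.random((800, 800))
sharpened = unsharp_mask(stacked)
```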

Conclusion

I'm in the process of learning, but happy to share what I learn, as I learn it. I'll write more as I learn new techniques and hope you will share your own thoughts, comments, and resources below. Thanks!
