Example processing workflow

Note, 21/12/2022: PixInsight has seen a lot of updates recently, and new processing techniques have emerged, so this article is due a refresh. See my Reprocessing Bonanza 2022 article for examples. I will update it when I have time, but until then I’ll leave this workflow here in case it’s helpful to anyone.

Every astrophotographer processes their data in different ways, so I thought it might be worthwhile to document my workflow for the Pelican Nebula using an Optolong L-eXtreme filter.

The picture below has a slider, which shows the source integrated data and the final image. Processing makes a huge difference, and is as important as your skills in obtaining good data in the first place!

I class myself as having intermediate processing skills, so I’m certainly not presenting my method as a gold standard, but there might be some useful tips in there for other imagers.

Here’s info about the source data:
* June & July 2021
* Bristol, UK (Bortle 8)
* Telescope: Askar FRA400 f/5.6 Quintuplet APO Astrograph
* Camera: ZWO ASI 2600MC-PRO
* Filter: Optolong L-eXtreme
* Mount: Orion Sirius EQ-G
* Guide: William Optics 32mm; ZWO ASI 120MM Mini
* Software: PixInsight, Photoshop, Topaz DeNoise AI, Lightroom
* Control: ASIAIR PRO
* 780 x 120 seconds (26 hours)

In general, when processing data taken with an Optolong L-eXtreme I follow this excellent tutorial:

I make a few modifications along the way, in particular my use of Lightroom and Topaz DeNoise AI.

Here’s a single 120-second subframe, debayered and with a simple stretch:

This is the integration of 780 x 120 seconds (26 hours) just with a simple stretch, before any proper editing:

On to my workflow!

* Load integrated image into PixInsight.
* DynamicCrop.
* BackgroundNeutralization.
* AutomaticBackgroundExtractor, with Function Degree set to 2 and Correction set to Subtraction.
* ColorCalibration.
* SCNR to remove green.
* Open STF and HistogramTransformation. Drag and drop STF’s triangle onto HistogramTransformation’s bottom bar, and then that triangle onto the image. This takes it to non-linear:
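
As a rough illustration of what that stretch does mathematically, here is a NumPy sketch of the midtones transfer function (MTF) that underlies STF and HistogramTransformation. The function names, parameter values, and the sample array are all illustrative, not PixInsight's actual API:

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: maps pixel value m to 0.5,
    while keeping 0 -> 0 and 1 -> 1."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def simple_stretch(img, shadows=0.0, midtone=0.25, highlights=1.0):
    """Clip to [shadows, highlights], rescale to [0, 1], then apply the MTF."""
    x = np.clip((img - shadows) / (highlights - shadows), 0.0, 1.0)
    return mtf(x, midtone)

# dim "linear" data standing in for an unstretched integration
linear = np.random.default_rng(0).random((4, 4)) * 0.1
stretched = simple_stretch(linear, midtone=0.05)  # low midtone = strong stretch
```

A low midtone value pulls faint nebulosity up out of the shadows, which is exactly what the STF auto-stretch is doing behind the scenes.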

* StarNet to remove stars. Stride 128, Create star mask. Rename star_mask “stars”.
* Deconvolution [note: I forgot this step when editing this particular image, so the end result isn’t quite as sharp as it could have been].
* Save starless image as a TIFF, 16-bit unsigned integer, no compression.
* Load into Photoshop. Clone out artifacts left by StarNet.
* Load cleaned Starless TIFF into Topaz DeNoise AI. Severe Noise Model, Remove Noise = 80, Enhance Sharpness = 50, Recover Original Detail = 60, Color Noise Reduction = 50.

* Load this latest version back into PixInsight.
* Split RGB channels. Close Blue, no need to save it. Rename the red channel as R, and the green as G.
* Make G look similar in overall brightness to R. To do this, use HistogramTransformation, especially the black-point and mid-point triangles; multiple small steps help. CurvesTransformation can also be used. Give R a little boost too if necessary.
* PixelMath: R*0.6+G*0.4. Tick Create new image, press the square (Apply Global) button, and rename the result B.
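
The synthetic blue channel from that PixelMath expression is just a weighted average of the two real channels. A minimal sketch, with NumPy arrays standing in for the stretched mono channels (the arrays and names are illustrative):

```python
import numpy as np

# R and G stand in for the stretched red and green channels,
# 2-D arrays normalised to [0, 1].
rng = np.random.default_rng(1)
R = rng.random((8, 8))
G = rng.random((8, 8))

# Same expression as the PixelMath step: B = R*0.6 + G*0.4
B = R * 0.6 + G * 0.4
```

Because the weights sum to 1, the synthetic channel stays in the same [0, 1] range as its inputs.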

* LRGBCombination. L = R. R = R. G = B. B = G. (That’s not a typo!) Tick Chrominance Noise Reduction. Rename result “starless LRGB”.
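
That channel mapping, with its deliberate swap of the synthetic blue and the original green, can be sketched as plain RGB stacking in NumPy. This ignores the luminance blending that the real LRGBCombination performs; the arrays and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
R = rng.random((8, 8))   # original red channel (mostly Ha signal)
G = rng.random((8, 8))   # original green channel (mostly OIII signal)
B = R * 0.6 + G * 0.4    # synthetic blue from the earlier PixelMath step

L = R  # luminance is taken from the red channel in the real process

# LRGBCombination mapping: output R <- R, output G <- B, output B <- G
rgb = np.stack([R, B, G], axis=-1)
```

Feeding the synthetic B image into the green slot and the real G image into the blue slot is what shifts the dual-band data toward the familiar teal-and-gold palette.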

* CurvesTransformation. Apply separately to R and B channels. Keep the shadows low to avoid colour casts in the background. (This is explained well in Luke’s video from 16:35).

* Script -> Utilities -> CorrectMagentaStars. Amount = 0.8.

* CurvesTransformation, adjust Hue.
* Utilities -> DarkStructureEnhance.
* Mask Generation -> RangeSelection. Use limits and smoothness to select brightest parts. Apply mask to image.
* Convolution -> UnsharpMask. Tip: use a preview box with the real-time preview. Brighten key areas.

* Save as TIFF again. Load in Topaz DeNoise AI. Severe Noise Model, Remove Noise = 70, Enhance Sharpness = 70, Recover Original Detail = 30, Color Noise Reduction = 50.
* Load into Lightroom.

* Save as a TIFF and load back into PixInsight. Rename it “ready_for_stars”.
* Maximise the star_mask image.
* Apply SCNR to remove green.
* CurvesTransformation to give the stars and their saturation a little boost.
* Convolution to soften the stars a little.

* PixelMath: ready_for_stars+0.7*stars
* One more dose of Script -> Utilities -> CorrectMagentaStars. Amount = 0.8.
* Back into Lightroom for a few minor tweaks and to export the finished photo.
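
The star add-back in the PixelMath step above is a simple weighted sum of the starless image and the star-only image. A minimal NumPy sketch, with random arrays standing in for the two images (a clip is added here to keep values in range, which PixelMath's rescale option would otherwise handle):

```python
import numpy as np

rng = np.random.default_rng(3)
ready_for_stars = rng.random((8, 8, 3))   # starless, fully processed image
stars = rng.random((8, 8, 3)) * 0.3       # star-only image from StarNet

# Same as the PixelMath expression: ready_for_stars + 0.7*stars
combined = np.clip(ready_for_stars + 0.7 * stars, 0.0, 1.0)
```

The 0.7 weight dials the stars back slightly so they don't overwhelm the nebulosity; it's worth experimenting with this factor for your own data.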



7 thoughts on “Example processing workflow”

  1. Mike73 says:

    Hey Lee

So I’m asking this as a beginner to AP, but I have a few years’ experience with Adobe products.
What does PixInsight bring to the table rather than just using Photoshop with StarNet++? This was the approach I was going to take, as well as using Astro Pixel Processor for stacking. PixInsight looks like a vast and confusing program to get my head around right now! Is it that much better?

    Really enjoyed your website and stunning images tonight!

    1. Lee says:

      Hi Mike, that’s a good question. I used to process images using DSS and Photoshop. Then a friend suggested I check out PixInsight, so I downloaded the free trial. It definitely does have a steep learning curve, but I followed some tutorials online and once I got the hang of the basics it really “clicked” with me.

      To boil it down as simply as possible, I’d say that Photoshop is a brilliant piece of software but is very broad in scope. PixInsight is designed from the ground up with only one task in mind: processing astroimages. It encourages you to think more scientifically about handling and processing your data, which really appeals to me.

      This isn’t to say that you should use PixInsight over Photoshop. A lot of very good astroimagers use Photoshop and produce fantastic images. It’s all about finding what works for you. Some people love StarTools, and I have tried that, but bounced right off it. I’ve no experience of Astro Pixel Processor so can’t comment on that, but the stacking process in PixInsight is very good.

      Regarding PixInsight’s complexity, there definitely is a lot to it, but I’d argue that it isn’t really much more complex than Photoshop — it’s just that more people have a good grounding in Photoshop, so they’re already over that initial steepest part of the learning curve. I’d also highlight that a lot of the processes in PixInsight have a gazillion different parameters that you can change, but actually work really well with their default settings. So, it doesn’t need to be as daunting as it may look. For a reference point, I only used it for the first time seven months ago, and I haven’t exactly had a lot of free time to put into learning it, but despite that I’m getting decent results.

I hope that goes some way toward answering your question… if you do end up giving PixInsight a go, feel free to ask me questions if you get stuck and I’ll be happy to help!

  2. Auke says:

Hello Lee, very helpful instruction on processing dual-band data. It helps me progress from pure red-colored pictures to something much more attractive. One question I have, though, is about throwing away the blue channel. I was wondering whether it is possible to make better use of this blue data, since it still seems to contain useful information about the object that I photographed (I used a ZWO dual-band filter).

    1. Lee says:

That’s a good question! In my experience, the blue channel has always been almost exclusively noise, so I’ve never felt bad about deleting it. Maybe your ZWO filter is better in this channel? In which case, I’d consider still making an artificial Blue channel, but perhaps by combining all three channels. In my processing workflow above, there’s the step to make an artificial Blue channel using PixelMath with this equation: “R*0.6+G*0.4”. If you want to include some of the original blue data, how about “B*0.5+R*0.25+G*0.25”? You could try different ratios and see what works best. Let me know if you find something that works better than my current method 🙂

  3. Auke says:

Will give it a try, but it may take a little while. Still on this steep PixInsight learning curve.
