By Juan M. Gómez (Silvercup)
With original stretched data from Dave Halliday
In this document we describe a processing workflow focused on enhancing faint details in underexposed images using ATrousWaveletTransform and PixelMath. We also describe the use of StarMask, Deconvolution, and noise reduction with ATrousWaveletTransform and MorphologicalTransformation.
For this example we have used an image of M27 acquired by Dave Halliday with a Vixen Visac at f/9. It's an integration of 25 frames of 420 seconds in the H-Alpha emission with an ST-2000XM binned 2x. The image has an initial histogram stretch. Despite the short subexposures, we are able to reveal the external faint halo.
Our processing example begins by duplicating the Original image and identifying it as "CloneImageSmallScales" for further PixelMath operations. We must remove the background (and with it the external halo) from this image. Using ATrousWaveletTransform, we find that unchecking the 256-pixel scale removes the background and part of the halo of the M27 nebula.
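The idea of isolating small-scale structures can be sketched outside PixInsight. In this minimal numpy example, a wide Gaussian blur stands in for ATrousWaveletTransform's large-scale residual layer; the function name and the sigma value are illustrative assumptions, not part of the original workflow.

```python
# Hypothetical sketch: isolate small-scale structures by subtracting a
# large-scale background estimate. A Gaussian blur stands in for the
# 256-pixel residual layer of ATrousWaveletTransform.
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_large_scales(image, sigma=32.0):
    """Return the image with its large-scale (background) component removed."""
    residual = gaussian_filter(image, sigma)        # smooth background estimate
    return np.clip(image - residual, 0.0, 1.0)      # keep only small scales

# Toy example: a flat background plus one bright "star"
img = np.full((128, 128), 0.2)
img[64, 64] = 1.0
clone_small = remove_large_scales(img)
```

After the subtraction, the flat background is gone while the point source survives almost unchanged, which is the behavior we exploit in the next steps.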
In Figure 2 we see the effect of removing the 256-pixel scale: the background is deleted.
After this, we must enhance this image with curves in one or more steps. After this process we will have an image with small-scale structures enhanced. It's a fine-tuning process. Our goal in the next step is to remove all the small scales from the Original image without leaving any artifacts due to stars or bright nebula.
In the second step we must "subtract" the small scales from the Original image. Through this operation we obtain an image with only the background and the faint halo. We must duplicate the Original image again, identifying it as "CloneImageLargeScales". We apply the Figure 4 PixelMath instance to this image: we multiply the cloned Original image ("CloneImageLargeScales") by the inverted small-scale image ("CloneImageSmallScales"). Note that Rescale result is unchecked.
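The multiplication by the inverted clone can be illustrated with plain array arithmetic. This numpy sketch is an assumption standing in for the actual Figure 4 PixelMath expression; the pixel values are invented for the toy example.

```python
# Hypothetical sketch of the PixelMath idea: multiplying the Original by the
# inverted small-scale image suppresses small-scale features, leaving the
# background and faint halo ("CloneImageLargeScales").
import numpy as np

original = np.full((64, 64), 0.3)        # background + faint halo level
original[32, 32] = 0.9                   # a small-scale feature (a star)

small_scales = np.zeros_like(original)   # enhanced small-scale clone
small_scales[32, 32] = 0.8

# PixelMath-style: CloneImageLargeScales = Original * ~CloneImageSmallScales
# ("Rescale result" unchecked, so values are not renormalized)
large_scales = original * (1.0 - small_scales)
```

Where the small-scale clone is zero the original passes through unchanged; where it is bright, the star is strongly attenuated.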
In Figure 5 we can see the effect of the previous PixelMath expression: only the background remains.
In order to enhance the external faint halo we must add the Large-Scales image to the Original image, but first we must smooth the Large-Scales image to avoid artifacts at nebula borders and around stars, as well as background noise propagation. Unchecking all layers except the 16-pixel scale in ATrousWaveletTransform does the job.
Figure 7 shows the background smoothing with ATrousWaveletTransform.
In this step we are going to add the Original image to the smoothed Large-Scales image (the background). We must apply this process twice due to the lack of signal. Figure 8 shows this operation graphically.
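The smoothing-plus-double-sum step can be sketched as follows. A Gaussian blur again stands in for keeping only the 16-pixel wavelet layer, and the flat toy values are illustrative assumptions.

```python
# Hypothetical sketch: smooth the large-scale (background) image, then add it
# to the original twice to compensate for the halo's weak signal.
import numpy as np
from scipy.ndimage import gaussian_filter

original = np.full((64, 64), 0.30)
large_scales = np.full((64, 64), 0.05)            # faint halo + background

# Stand-in for keeping only the 16-pixel layer in ATrousWaveletTransform
smoothed = gaussian_filter(large_scales, sigma=8.0)

result = np.clip(original + smoothed, 0.0, 1.0)   # first pass
result = np.clip(result + smoothed, 0.0, 1.0)     # second pass (weak signal)
```

Each pass lifts the faint regions by the smoothed halo level, while the clip keeps the result in the normalized [0, 1] range.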
We must apply a Star Mask to the Original image before adding the Large-Scales in order to avoid star growth. The StarMask must match the stars perfectly so that no loss of signal occurs around them. Figure 9 shows the StarMask parameters; we reduce Smoothness to 10 and Growth to 0.
Figure 10 shows the generated Star Mask.
Figure 11 shows that the Star Mask perfectly matches the stars in the image.
With the stars protected by the StarMask, we can proceed to merge the Original and Large-Scales images with a simple sum PixelMath expression, with Rescale result unchecked (Figure 12). The external halo is more evident, while stars and the bright nebula remain unchanged.
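A masked sum of this kind can be illustrated directly. The mask convention used here (1 = star, fully protected) and the toy values are assumptions for the sketch; in PixInsight the mask is applied to the target image rather than written into the expression.

```python
# Hypothetical sketch of a star-protected sum: the star mask keeps the added
# halo from brightening stellar profiles.
import numpy as np

original = np.full((64, 64), 0.30)
original[10, 10] = 0.95                    # a star

background = np.full((64, 64), 0.10)       # smoothed large-scale halo

star_mask = np.zeros_like(original)
star_mask[10, 10] = 1.0                    # perfectly matched to the star

# Add the halo only where the mask does not protect
result = np.clip(original + (1.0 - star_mask) * background, 0.0, 1.0)
```

The star pixel is left untouched while the surrounding field is lifted by the halo signal, which is exactly the effect seen in Figure 12.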
In Figure 12 we can see the effect of the PixelMath expression.
If we apply a second PixelMath operation, we risk burning the bright halo. With HDRWaveletTransform we avoid burning the bright halo and make the internal structures visible.
Figure 15 shows M27's internal structure and a slight decrease in brightness in the bright halo.
After HDRWaveletTransform we can perform the second PixelMath operation without any burning risk. See the result in Figure 16.
Deconvolution actually only makes sense for linear images. As the Original image is initially stretched, deconvolution is used here as an edge enhancement technique, as a sort of sophisticated unsharp mask filter. With non-linear images you must be less aggressive with the StdDev and Shape parameters and use fewer iterations. In this example we chose StdDev 1.50 and Shape 2.65. Note the use of Deringing and Local deringing with the previously generated Star Mask.
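To make the idea concrete, here is a bare Richardson-Lucy iteration with a Gaussian PSF. This is only an assumption-laden stand-in: PixInsight's Deconvolution tool (with its StdDev/Shape parameterization and deringing options) is far more elaborate, and the sigma and iteration count below are illustrative.

```python
# Hypothetical sketch of Richardson-Lucy deconvolution with a Gaussian PSF;
# gaussian_filter applies the (symmetric) PSF in both steps of the iteration.
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(image, sigma=1.5, iterations=20, eps=1e-7):
    """Basic multiplicative Richardson-Lucy loop."""
    estimate = np.full_like(image, 0.5)             # flat initial estimate
    for _ in range(iterations):
        blurred = gaussian_filter(estimate, sigma)  # forward model
        ratio = image / (blurred + eps)             # data / prediction
        estimate *= gaussian_filter(ratio, sigma)   # PSF is symmetric
    return estimate

# Blur a point source, then try to sharpen it back
truth = np.zeros((65, 65))
truth[32, 32] = 1.0
observed = gaussian_filter(truth, 1.5)
restored = richardson_lucy(observed, sigma=1.5, iterations=20)
```

After the iterations, flux reconcentrates toward the point source: the restored peak is higher than the observed one, which is the sharpening effect being used on the stretched image.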
If we apply deconvolution to the whole image we get artifacts in the faint parts. We must additionally mask the image so that deconvolution is applied only to the zones with more signal. We make the mask from a duplicate image, modifying it with curves. Figure 18 shows the mask we use, with the higher-signal parts enhanced. We apply it inverted.
Figure 19 shows a sharper image after the deconvolution process. The deringing parameters perform as expected, and the additional mask protects the background and faint parts.
Final Touches: Curves and Histogram.
Finally, a CurvesTransformation adjustment to taste and a HistogramTransformation shadows clipping made necessary by the previous processes. Figure 20 shows our adjustments.
Although PixInsight has specific algorithms to reduce noise, this time, instead of applying ACDNR, we will reduce noise with ATrousWaveletTransform. Figure 21 shows the parameters used; we only enabled Noise Reduction on layers one and two. Note the Deringing Dark parameter reduced to 0.0050 to avoid star deringing. The image is masked with the previously generated deconvolution mask, but this time not inverted, so noise reduction is applied to the faint parts while the bright parts are protected.
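Per-layer wavelet noise reduction can be sketched as extracting a fine-scale layer, soft-thresholding it, and recombining. ATrousWaveletTransform uses a B3-spline à trous decomposition; the Gaussian split and the threshold value below are illustrative stand-ins, not the tool's actual algorithm.

```python
# Hypothetical sketch of noise reduction on the finest wavelet layer:
# extract layer 1 as (image - small blur), soft-threshold it, recombine.
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_layer1(image, threshold=0.02):
    residual = gaussian_filter(image, 1.0)   # coarser scales
    layer1 = image - residual                # finest-scale layer
    # Soft threshold: shrink small (noise-dominated) coefficients to zero
    shrunk = np.sign(layer1) * np.maximum(np.abs(layer1) - threshold, 0.0)
    return residual + shrunk

rng = np.random.default_rng(0)
clean = np.full((128, 128), 0.3)
noisy = clean + rng.normal(0.0, 0.01, clean.shape)
denoised = denoise_layer1(noisy)
```

Most fine-scale coefficients fall below the threshold and are removed, so the standard deviation drops while the mean signal level is preserved, mirroring what layers one and two do in the real tool.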
Figure 22 shows a 2x resampled crop of the image. The ATrousWaveletTransform noise reduction achieves an excellent result without significantly altering structures.
Figure 23 compares the whole image, noisy and clean.
After deconvolution, stars tend to burn, and the use of curves or histograms makes them grow. We will correct these defects with MorphologicalTransformation. Normally we use MorphologicalTransformation with moderate settings and several iterations with different structuring elements. In this example we use aggressive parameters so the effect is clearly visible. Figure 24 shows the parameters used: Erosion with an Amount of 0.75 and Low Thresholds set to 1.0.
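The erosion-with-amount behavior can be sketched as a blend between the eroded image and the original. scipy's `grey_erosion` stands in for PixInsight's structuring-element erosion, and the 3x3 element is an assumption for the example.

```python
# Hypothetical sketch of MorphologicalTransformation-style erosion with an
# "amount" blend: the output is 75% eroded image and 25% original.
import numpy as np
from scipy.ndimage import grey_erosion

def erode_with_amount(image, amount=0.75, size=3):
    eroded = grey_erosion(image, size=(size, size))  # min filter shrinks stars
    return amount * eroded + (1.0 - amount) * image

img = np.full((32, 32), 0.2)
img[16, 16] = 1.0                      # a bloated star core
shrunk = erode_with_amount(img)
```

The blend tames a burned star core without flattening it to the background, while flat regions are left exactly as they were; this is why moderate amounts over several iterations are usually preferable to a single aggressive pass.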
Figure 25 clearly shows unburned stars due to the Low Thresholds increase, and smaller stars due to the 75% erosion amount.
Figure 26 shows smaller and better star shapes.
Finally, we compare our image with another M27 taken by Don Goldman with an RCOS 16" f/8.9 and an Apogee U16M with 3nm H-Alpha and OIII narrowband filters (9 hours integration): http://old.astrodon.com/oldsite/M27SRONB.html. I hope Don Goldman does not mind our using his image for didactic purposes. Although in our opinion Don Goldman's image is clipped, we see that all the structures are coincident. OIII emission, camera sensitivity, subexposure time, integration time and aperture make the difference.
That's all, enjoy processing