How to extract Image Information by Adaptive Deconvolution

LIGHTNING Image Information Extraction

Confocal Laser Scanning Microscopy (CLSM) is the standard for true 3D-resolved fluorescence imaging. Fast optical sectioning using flexible scanning strategies, in combination with simultaneous multi-colour, high-sensitivity and low-noise signal detection, provides maximum resolution in the spatial and temporal domain. Combined with modern approaches to image information extraction, this helps the researcher mine as much information as possible from the acquired images. Image information extraction refers to intelligent procedures for image enhancement using a priori knowledge of the imaging system. From simple glare control and optical development to intelligent and ingenious model extraction, there are many ways to see more than just the image.

Irrespective of the extremely good three-dimensional scanning quality of CLSM, physically caused diffraction phenomena occur during imaging. They are characteristic of every imaging system and can be described by the so-called Point Spread Function (PSF). An object imaged via an optical system represents the convolution of the object with the optical characteristics (PSF) of the imaging system. These diffraction phenomena produce a kind of "smearing" of the object, resulting in a reduction of the effective resolution and an incorrect imaging of the exact position of the individual photons. In addition, background and noise effects occur during the scanning of biological samples, which in turn further reduce the actual information content of the raw image data.
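
As an illustration of this forward model, the following sketch (in Python, not part of LIGHTNING) simulates an object convolved with a simplified Gaussian PSF plus background and photon shot noise; the grid size, PSF width and background level are arbitrary example values.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Hypothetical "true" object: a few point-like emitters on a 256 x 256 grid.
obj = np.zeros((256, 256))
obj[tuple(rng.integers(0, 256, size=(2, 40)))] = 1000.0      # photon counts per emitter

# Approximate the PSF by a Gaussian blur; the real PSF is diffraction-shaped
# and specific to the imaging system.
blurred = gaussian_filter(obj, sigma=2.0)                     # convolution with the PSF

# Detection adds a background offset and Poisson (shot) noise.
image = rng.poisson(blurred + 5.0).astype(float)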

The entirety of these effects can be represented by a pyramidal extension of the magic triangle of microscopy, consisting of resolution, speed, sensitivity and spectral range, which are incompatibly located at the respective corners. Increasing one of the accessible areas results in decreasing the others accordingly.

But there are ways to push these limits further: the pyramid's corners can be widened and its dependencies dissolved by identifying interfering signals via sophisticated intelligent models and by correlating individual photons with their original location through the process of deconvolution.

LIGHTNING makes it possible to penetrate these very limits in near real time and to extract the original information of confocal data in a highly efficient manner, thus not only extending confocal imaging beyond the diffraction limit [1], but also pushing the effective sensitivity and temporal resolution in parallel. A clear image of the true nature of the underlying specimens is obtained by adaptive image information extraction. LIGHTNING's ground truth based adaptive deconvolution represents the best possible procedure for quantifiable, reproducible and trustworthy information recovery compared to classical methods, as described in the following.

LIGHTNING

LIGHTNING is a new method for fully automated, intelligent information extraction from confocal data in near real time using extremely fast parallel GPU processing. The main difference to conventional methods lies in the voxel-precise, on-the-fly evaluation of image properties. This process is fully connected to the imaging system's optical and detector-based interfaces and thus fully integrated into the respective data acquisition stream. On this basis, the optimum parameters for the subsequent deconvolution are determined for each associated volume segment. This adaptive process of correlating the deconvolution parameter space with the local image properties enables fully automated, voxel-accurate information recovery not only within the corresponding image data, but in particular for any biological sample and application (see Figure 1). Another feature of this method is the preservation of information-carrying signals due to the site-specific optimal reconstruction procedure.

In contrast, traditional deconvolution methods use a global approach that does not take location-dependent differences in image properties into account. This means that these processes cannot be applied fully automatically, but are always based on a 'best guess' approach through which the best possible compromise for the global deconvolution parameter space is sought. The disadvantage of such an approach is naturally that it does not take inhomogeneities in the image properties into account, which makes it highly likely that information-carrying signals will be mistakenly rejected or, conversely, that unwanted signals such as background or noise will be interpreted as information units and enhanced.

Within LIGHTNING, the original confocal data is always retained for classic (non-adaptive) deconvolution or post-processing procedures. Moreover, LIGHTNING can also be used in a non-adaptive mode similar to classic deconvolution.

Classical Deconvolution

Ideally, deconvolution removes out-of-focus blur not by discarding out-of-focus signal, but by re-assigning the signal to its original position, thus preserving the total signal, i.e. the photon count, in the imaging volume. General methods for deconvolution are based on suitable pre-processing of the microscope raw data followed by the actual, classical deconvolution (see Figure 2).

Pre-processing serves to prepare the raw image data for deconvolution using generic image processing methods. As a rule, smoothing methods for determining the background signal and the signal-to-noise ratio (SNR) are used for this purpose. Deconvolution is performed after this procedure using previously defined global deconvolution parameters. These parameters include values of the microscope characteristics, such as excitation/emission wavelengths and the specifications of the objective lens, as listed in the configuration files of the microscope hardware. Moreover, flexible values like generic background and SNR, but especially deconvolution-related parameters (see below), need to be set by the user to define the parameter space for deconvolution. These values have a tremendous impact on the accuracy and trustworthiness of the deconvolution.

Independent of the microscope configuration, the relevant globally set parameters are the number of steps in the iterative deconvolution process and the regularization (see Figure 3). The number of iterations and the regularization determine a measure of the accuracy/trustworthiness of the deconvolution and must be very carefully balanced, mainly based on the SNR. The regularization procedure takes place between the individual deconvolution steps, which are usually based on a so-called Richardson-Lucy procedure using Fast Fourier Transforms (FFT).

Regularization: The regularization parameter represents a measure of the extent to which a signal is interpreted as background or noise by the algorithm. Correct estimation of this parameter is therefore critical to avoid the generation of artifacts (background or noise interpreted as an information-carrying signal) or, conversely, the rejection of information-carrying units falsely identified as background or noise.

Iterations: The deconvolution procedure itself is performed iteratively until a suitable abort criterion is reached, at which point the deconvolution stops and the data is fully processed.

Thus, the entirety of the deconvolution parameter space is composed of parameters that are linked to:

  1. Microscope hardware, imaging and experimental setup: Excitation/emission wavelengths, objective lens, resolution, sample substrate, immersion/embedding media, etc.
  2. Image characteristics: Background, SNR, regularization, number of iterations, etc.
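
To make the role of this global parameter space concrete, the sketch below shows a classical Richardson-Lucy loop in Python with one fixed iteration count and one fixed regularization floor applied to the whole image; the parameter names and the specific form of the regularization are illustrative assumptions, not the implementation of any particular software.

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_global(image, psf, n_iterations=20, reg_floor=1e-6):
    """Classical RL deconvolution with globally fixed parameters (sketch)."""
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iterations):                            # globally fixed stop criterion
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(reblurred, reg_floor)     # crude global regularization
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate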

LIGHTNING - Adaptive Deconvolution

The LIGHTNING deconvolution approach is based on a completely new adaptive method, which reads out local image properties during image acquisition (pre-processing) and extracts suitable deconvolution parameters for the regularization procedure. This enables fully automated deconvolution independent of manual user input (see Figure 4).

1. Pre-Processing

Within the pre-processing step, local image properties with regard to background and signal-to-noise ratio are determined with voxel accuracy:

Background: First a global background b_global is identified, which is put in relation to the corresponding local signal-to-noise ratio for a local background estimation b(x,y).

Signal-to-Noise Ratio: The signal-to-noise ratio is determined by a suitable estimation of the gray values g(x,y) of each pixel depending on its neighborhood and a specific kernel f_b for smoothing.
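
A minimal sketch of such a per-pixel estimate is given below, assuming a uniform smoothing kernel f_b and a Poisson-like noise model; the concrete formulas for b_global, b(x,y) and the SNR map are assumptions for illustration, not the LIGHTNING implementation.

import numpy as np
from scipy.ndimage import uniform_filter

def local_background_and_snr(image, kernel_size=9):
    """Per-pixel background b(x,y) and SNR estimate (illustrative sketch)."""
    g_local = uniform_filter(image.astype(float), size=kernel_size)  # smoothed g(x,y) via kernel f_b
    noise = np.sqrt(np.maximum(g_local, 1.0))                        # Poisson-like noise estimate
    b_global = np.percentile(image, 5)                               # global background b_global
    snr_local = np.clip(g_local - b_global, 0.0, None) / noise       # local signal-to-noise ratio
    b_local = b_global / (1.0 + snr_local)                           # b(x,y): b_global related to local SNR
    return b_local, snr_local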

2. Decision Mask

The key element of LIGHTNING is an adaptive dynamic process operating on-the-fly and fully integrated into the system's data stream, which uses the best possible procedure for obtaining information based on local image properties. For this purpose, the underlying image properties with regard to background and signal-to-noise ratio are extracted for each voxel / volume segment as described above, and on this basis the optimal, voxel-accurate parameter sets are provided for the subsequent deconvolution.

From the determination of the specific background and signal-to-noise information, LIGHTNING generates a so-called Decision Mask in n dimensions, with n being the number of data acquisition dimensions (e.g. n = 3 for an xyz data stack, see Figure 5).

Information from each voxel resulting from the Decision Mask is correlated with an associated deconvolution parameter set via an adaptation coefficient. The adaptation coefficient is directly related to the regularization parameter and translates the local image properties of the confocal data into suitable deconvolution parameters for each voxel (see Figure 6).

The entire process for generating the Decision Mask is based on generic image processing methods and is therefore fully quantifiable, i.e. no intensity or localization based characteristics of individual photons and photon counts respectively are changed. This process only extracts information from the confocal data and does not make any modifications. The Decision Mask thus defines the local image quality characteristics in terms of background and SNR voxel-by-voxel in the confocal data to feed the deconvolution process, i.e. regularization and number of iterations in an adaptive way.
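
The sketch below illustrates how a Decision Mask of this kind could translate the local background and SNR maps into per-voxel deconvolution parameters via an adaptation coefficient; the chosen mapping is purely an assumption for illustration, as the actual LIGHTNING coefficients are not specified here.

import numpy as np

def decision_mask(b_local, snr_local, base_regularization=1e-3):
    """Map local image properties to per-voxel deconvolution parameters (sketch)."""
    # Adaptation coefficient in [0, 1]: small where the local SNR is poor,
    # close to 1 where an information-carrying signal clearly dominates.
    adaptation = snr_local / (snr_local + 1.0)
    # Per-voxel regularization: stronger (larger) where SNR is low, weaker where it is high.
    regularization = base_regularization / np.maximum(adaptation, 1e-3)
    return {"adaptation": adaptation,
            "regularization": regularization,
            "background": b_local}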

3. Deconvolution

The actual deconvolution step is based on the use of a Richardson-Lucy algorithm and a physically modeled Point Spread Function, which is adapted to the respective imaging method (confocal, STED, multiphoton, etc.). The underlying model was adapted according to the publications [2, 3, 4] and optimized for the Leica system environment. The result is an optimal reconstruction procedure for each voxel / volume segment, which excludes unwanted signals such as background and noise and at the same time preserves and reveals information-bearing structures.

The adaptivity of the deconvolution is reflected in the application of the per-voxel deconvolution parameter space extracted from the Decision Mask. Apart from this, the pure deconvolution procedure corresponds to the classical, conventional procedure described above. The abort criterion for the number of iterations is fully automated and is defined by a continuous comparison of the image from the last executed iteration step with the one from the previous iteration. The iteration is terminated as soon as the images from the last two iterations no longer show any differences in their essential characteristics.
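
The following sketch shows a Richardson-Lucy loop with such an automatic abort criterion, accepting a per-voxel regularization map as produced by a decision mask; the relative-change threshold used as the stopping measure is an illustrative assumption.

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_auto_stop(image, psf, regularization, max_iterations=50, tol=1e-3):
    """RL deconvolution with a per-voxel regularization map and automatic stop (sketch)."""
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())
    for _ in range(max_iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(reblurred, regularization)        # per-voxel regularization
        new_estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
        change = np.abs(new_estimate - estimate).sum() / (estimate.sum() + 1e-12)
        estimate = new_estimate
        if change < tol:          # successive iterates essentially identical: stop
            break
    return estimate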

In a final step, the photon number of the pre-processed image (prior to deconvolution) is used to normalize the deconvolved image. This means that the output images obtained are always comparable with each other, since no maximum-value or variable-factor based normalisation occurs. Figure 9 shows a confocal plane of a kidney section that has been corrected with LIGHTNING: the pseudocolour representation (Figure 9, mid) shows photons in blue, which either were excluded from the respective volume segment (background or noise) or were re-assigned to another location, i.e. their original one (information-carrying photons/signals). Values in red indicate information-carrying photons/signals, which were re-assigned to their original volume segment through deconvolution. Note that information-carrying photons which form a "visible" structure in the confocal data but are not represented in the deconvolved image were re-assigned to adjacent image planes that are not displayed here.

On the one hand, this process enables fully automated handling and, on the other, a highly precise, quantitative reconstruction of the observed signals. This sets LIGHTNING fundamentally apart from classical methods, whose globally applied procedures cannot take local variability in the image data into account and thereby either erroneously reject or retain signals.

Figure 10 shows a typical application for a data set with a very low number of photons per pixel acquired at 44 frames per second. The voxel-specific differences between background, noise and information-carrying signal are very small in this case, which means that the consideration of local variances has an enormous influence on the reconstruction scheme. The respective effects of background and SNR clearly illustrate the advantages of the adaptive, ground truth based deconvolution or rather the failure of an approach based on a global deconvolution ‘best guess’ estimation.

This example demonstrates how LIGHTNING helps to reduce the effective light dosage and thus photobleaching by making low photon count data available for analysis.

Non-adaptive Deconvolution using LIGHTNING

In addition to the adaptive deconvolution described above, LIGHTNING generally offers further strategies that make no use of the Decision Mask as a basis for defining the deconvolution parameter space. Instead, these strategies use the known, generic approach of a globally effective parameter set for reconstruction, thus matching the traditionally known deconvolution procedure.

LIGHTNING - Quantifiability

As already described above, the process for generating the Decision Mask is a linear information extraction purely based on proven image processing methods. The procedure does not modify the confocal data prior to deconvolution, which means that this central aspect of LIGHTNING remains fully quantifiable. After the extraction of voxel-by-voxel image characteristics and their transformation into corresponding deconvolution parameters, in terms of the regularization parameter and the number of iterations, this information is fed into the subsequent deconvolution process.

The deconvolution itself works analogously to conventional methods (see above) and is therefore subject to the same quantifiability characteristics with regard to its (non-)linearity as this type of reconstruction in general [5]. Despite the use of an adaptive deconvolution parameter space, on the basis of which locally varying deconvolution strategies are applied, no local and thus relative intensities are distorted during reconstruction: the correlation width of the deconvolution process corresponds to the width of the associated PSF, and the locally varying deconvolution parameters remain constant within this correlation width. This ensures that the variation of the deconvolution parameters is sufficiently slow to completely avoid such intensity-based effects.

Apart from the exclusion of background and noise interference, an essential feature of LIGHTNING is the

  • Preservation of the sum of all intensities
  • Preservation of the photon number

of the pre- and post-deconvolved images.

For each processing step, the corresponding key figures for both intensity and photon number are compared before and after the deconvolution, which means that the sum of intensities and the number of photons are fully quantifiable.
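
A minimal sketch of this bookkeeping step is shown below: the deconvolved image is rescaled to the photon count of the pre-processed input, so that intensity sums remain directly comparable; the helper name and tolerance are illustrative.

import numpy as np

def normalize_to_photon_count(deconvolved, preprocessed):
    """Rescale the deconvolved image to the photon number of the pre-processed image (sketch)."""
    scale = preprocessed.sum() / max(deconvolved.sum(), 1e-12)
    normalized = deconvolved * scale
    # Key figures compared before and after deconvolution:
    assert np.isclose(normalized.sum(), preprocessed.sum()), "photon number not preserved"
    return normalized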

By using the adaptive, ground truth based approach (Decision Mask), the probability of generating artifacts or excluding information-carrying signals is reduced to a minimum. In fact, LIGHTNING's adaptive deconvolution represents the best possible procedure in terms of quantifiability compared to conventional methods.

LIGHTNING – Confocal Super-Resolution

Through the process of deconvolution, photons and their associated intensities are re-assigned back to their original state, which also reduces effects such as diffraction phenomena in the optical image to a minimum. This can significantly increase the effective resolution of the optical system. LIGHTNING enables resolutions down to 120 nm laterally and 200 nm axially.

The Rayleigh Criterion [6] defines the limit of resolution in a diffraction-limited system, in other words, when two points of light are distinguishable or resolved from each other. If the diffraction patterns from two single Airy Discs do not overlap, then they are easily distinguishable, ‘well resolved’ and are said to meet the Rayleigh Criterion (see Figure 11, left). When the centre of one Airy Disc is directly overlapped by the first minimum of the diffraction pattern of another, they can be considered to be ‘just resolved’ and still distinguishable as two separate points of light (see Figure 11, mid). If the Airy Discs are closer than this, then they do not meet the Rayleigh Criterion and are ‘not resolved’ as two distinct points of light (or separate details within a specimen image; see Figure 11, right).
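
For a lateral, fluorescence-only estimate, the Rayleigh limit can be written as r = 0.61 λ / NA; the short example below uses an assumed excitation wavelength of 488 nm and a numerical aperture of 1.4, which are not tied to any specific instrument configuration.

def rayleigh_limit_nm(wavelength_nm, numerical_aperture):
    """Lateral Rayleigh resolution limit r = 0.61 * lambda / NA (in nm)."""
    return 0.61 * wavelength_nm / numerical_aperture

print(rayleigh_limit_nm(488, 1.4))   # ~213 nm: points closer than this are 'not resolved'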

Figure 12 shows the comparison of a confocal data set with a data set acquired under LIGHTNING conditions. The objects are molecular nanorulers (SIM 120, Gattquant GmbH) carrying two fluorescent marks at a distance of 120 nm from each other. The confocal data set shows the typical, fused distribution of a diffraction-limited object, whereas LIGHTNING is able to resolve the respective marks and thus their distance of 120 nm. Nanorulers which still appear as a 'smeared' single point or are not visibly resolved in the image are not the consequence of an imperfect reconstruction: these objects are randomly oriented on the sample holder and therefore cannot all be imaged perpendicular to the detection axis of the microscope, an effect that can be measured under these boundary conditions.

Summary

Despite the emergence of new imaging methods in recent years, true 3D resolution is still achieved by Confocal Laser Scanning Microscopy (CLSM) as standard. Through a combination of novel, extremely fast scanning methods with highly sensitive, low-noise detectors and simultaneous multi-spectral data acquisition, Confocal Laser Scanning Microscopy could be expanded exclusively by Leica to such an extent that previously inaccessible dynamic and spectral ranges became accessible. Nevertheless, the constraints imposed by the physical optical imaging properties and by background- and noise-induced interference effects remain a limitation on the effective spatial and temporal resolution of all imaging methods.

With the introduction of LIGHTNING, a completely new, system-integrated module is available which significantly penetrates these very limits in near real time and pushes the corners of the magic pyramid, consisting of resolution, sensitivity, speed and spectrum (spectral range). Thus, a clear image of the true nature of the underlying specimens is obtained by adaptive image information extraction.

The use of a ground truth based adaptive deconvolution allows a highly reliable, fully automated extraction of image information, completely independent of manual user input, that would otherwise not be accessible due to diffraction phenomena and the biological properties of the specimen under examination. Thus, resolutions far below the theoretical diffraction limit can be achieved, and image information can be revealed which, although spatially and temporally present in the confocal data, was previously not visible due to diffraction and noise.

LIGHTNING not only increases the effective spatial but also the effective temporal resolution, enabling the accessible spectral range to be extended tremendously in parallel. Using LIGHTNING, it is no longer necessary to design the experimental setup in such a way that the information-bearing structures are visibly mapped in the confocal data. In fact, this information is already contained in image data acquired with low photon numbers and the highest scanning speeds. LIGHTNING extracts the underlying information layer fully automatically, resulting in effective confocal scanning speeds of up to 428 frames per second using 5 colours simultaneously. This is comparable to optical methods that are not designed for point scanning and thus do not have access to true confocal resolution.

LIGHTNING is the first step towards intelligent detection methods fully anchored in the imaging system. Its corresponding modules will continuously be expanded and improved using novel and innovative digital technologies to extract maximum information from every biological sample under every condition.
