Deconvolution

Image Processing for Widefield Microscopy

July 05, 2012

Fluorescence microscopy is a modern and steadily evolving tool for shedding light on current questions in cell biology. With the help of fluorescent proteins or dyes it is possible to visualize discrete cellular components in a highly specific manner. A prerequisite for these kinds of investigations is a powerful fluorescence microscope. One particular aim is the three-dimensional rendering of a structure, to convey its full plasticity. This poses a certain problem for the experimenter using a classical light microscope.

To create a 3D rendering of a fluorescing structure it is necessary to acquire a series of 2D images which are later combined into a stack. In doing so, one has to consider that each single 2D image contains not only information from its focus position, but also information from other levels (out-of-focus signals). Scattered light leads to a rather fuzzy image with collateral fluorescence data that distorts the visualization of the real object. Deconvolution is a technique for removing this out-of-focus information by applying a mathematical algorithm. In this way, the user can achieve sharper images of specific focus levels and a more realistic 3D impression of the structure of interest.

The problem

Fig. 1: Projection of a point-shaped light source by an optical system. A sub-resolution fluorescing latex bead serves as a point-shaped light source. Its perfectly round shape cannot be reproduced by an optical system. Due to stray light, the xy view of a bead shows some blur around the object. After acquisition of a z-stack, the xz projection appears as an hourglass, which is typical for the distortion of a point-shaped object by an optical system.

Every experimenter using a fluorescence microscope wants information about the structure of interest that is as detailed as possible. The problem lies in the limited ability of an optical system to produce a realistic image. Every light source, for example a GFP-coupled protein, emits scattered light. In practice this can lead to a blurry signal, depending on the thickness of the specimen. To overcome this problem, different approaches have been developed. On the one hand, confocal microscopy excludes out-of-focus information by sophisticated positioning of pinholes in the excitation and emission light paths. This leads to an image with high z-resolution (= low focal depth) and without any out-of-focus contribution. Because the image is built up by scanning a small confocal volume point by point, the user has to work with a high light dose provided by a laser. So the benefit of a sharper image is offset by a considerable disadvantage: the high energy input may cause bleaching and, in general, damage to the cell (phototoxicity). Therefore, a confocal system may not always be the best choice for live cell imaging. A conventional (widefield) fluorescence system has the advantage of higher sensitivity at lower exposure. With a low light dose, cells are not damaged and fluorescence lasts longer. As already mentioned, this benefit comes at the expense of lower resolution.

To approach this problem it makes sense to look at a very small structure – a sub-resolution latex bead. Viewing this 3D fluorescent object with a fluorescence microscope in xy orientation yields the projection of a glowing point with a blurry surrounding (see Figure 1). Acquisition of a z-stack results in the following depiction: the side view of the bead resembles two cones joined at their apexes. This is due to stray light recorded during z-stack acquisition. The goal of deconvolution is to subtract this "false" information from the real situation with the help of a mathematical calculation.

Point Spread Function

To understand the basics of this procedure it is necessary to introduce a term that is used very often in connection with deconvolution: the point spread function.

To recap: if an experimenter wants to obtain a three-dimensional impression of an object, he is forced to assemble a 3D image from a sequence of 2D images. That is why one records a z-stack and puts these pictures together. The result of such an approach is shown in Figure 1 (right). As one can see, the product is not a perfectly round sphere (latex bead), owing to the problem with stray light described in the first paragraph. In principle, this phenomenon stems from the limited ability of an optical system to depict a point-shaped light source. The signal passing through the lenses of the microscope is distorted depending on the adjustment of the system, the wavelength, the objective and its numerical aperture (NA), the refractive index of the immersion medium, and other parameters. The combined effect of all these influences on the depiction of a point-shaped object by an optical system is described by the point spread function (PSF). In physical terms, the object is convolved (folded) with the PSF. This also means that by knowing the PSF it is possible to unfold the object again, which is logically named deconvolution (see Figure 2).
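This "folding" of an object by the PSF can be simulated numerically. The sketch below is an illustration only: it uses a simple Gaussian kernel as a stand-in for a real microscope PSF and blurs a single-pixel "bead" by FFT convolution.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """2D Gaussian kernel normalized to sum to 1 (a stand-in for a real PSF)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def fft_convolve(image, psf):
    """Circular convolution via the FFT: the image is 'folded' by the PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(np.fft.ifftshift(psf))))

# A point-shaped object: one bright pixel, like a sub-resolution bead
obj = np.zeros((64, 64))
obj[32, 32] = 1.0

psf = gaussian_psf(64, sigma=2.0)
blurred = fft_convolve(obj, psf)
# The point's energy is now spread over many pixels: the peak drops to a
# few percent of its original value, while the total intensity is conserved.
```

The blurred bead is exactly what Figure 1 shows experimentally: a glowing point surrounded by blur.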

Fig. 2: Point spread function (PSF). The depiction of a point-shaped object by an optical system is influenced by several parameters, such as the adjustment of the system, the objective and its numerical aperture, and the refractive index of the immersion medium. Altogether this leads to a distortion of the object, which can be described by a mathematical function, the PSF. In physical language, the object is convolved (folded) with the PSF. Logically, by knowing the PSF, the object can be unfolded again. This process is called deconvolution.

The question then is how to obtain the PSF. In principle there are two ways. The admittedly more precise method is to measure the PSF: one images a sub-resolution object of known dimensions and determines the PSF from the result. For practical reasons, this route is not taken very often. An easier and faster way is to calculate the PSF: given information such as the excitation and emission wavelength peaks or the NA of the objective used, this theoretical value is estimated by a computer algorithm.
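As a rough sketch of such a calculation: a common rule of thumb approximates the lateral profile of a widefield PSF by a Gaussian with a standard deviation of about 0.21 λ/NA. The prefactor varies slightly between derivations, and this is not necessarily the algorithm any particular software uses; all numeric values below are illustrative assumptions.

```python
import numpy as np

def psf_sigma_nm(wavelength_nm, na):
    """Lateral standard deviation (in nm) of a Gaussian approximation to
    the widefield PSF. The 0.21 prefactor is a common rule of thumb."""
    return 0.21 * wavelength_nm / na

def theoretical_psf(size, pixel_size_nm, wavelength_nm, na):
    """2D Gaussian PSF sampled on the camera pixel grid, normalized to sum to 1."""
    sigma_px = psf_sigma_nm(wavelength_nm, na) / pixel_size_nm
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma_px**2))
    return psf / psf.sum()

# Example: GFP emission (~510 nm), 1.4 NA oil objective, 65 nm camera pixels
psf = theoretical_psf(33, pixel_size_nm=65, wavelength_nm=510, na=1.4)
```

For these example values the lateral sigma comes out at roughly 77 nm, i.e. on the order of one camera pixel.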

Deconvolution – prerequisites and forms

As mentioned above, with knowledge of the PSF – even if it is a theoretical value – it is possible to reverse the convolution introduced by the optical system. But before deconvolution can be started, several criteria have to be fulfilled to obtain reliable, high-quality results. Naturally, the better the original images are, the better the deconvolved image will be.

A very important requirement for deconvolution is a high-quality CCD camera with a linear response over a broad dynamic range. Furthermore, it is essential not to overexpose the image. In this respect it is a good idea to use a tool in the color look-up tables (LUTs) called "Glow Over Glow Under" (LAS AF). This tool marks saturated regions and gives the user the possibility to correct them by adjusting the gain and offset of the detectors. Optimal contrast for deconvolution can thus be obtained.
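The same check can be expressed numerically. The sketch below flags clipped pixels in the spirit of such an over/underexposure LUT; the 12-bit camera depth is an assumption to be adjusted to the actual hardware.

```python
import numpy as np

def saturation_report(image, bit_depth=12):
    """Fraction of pixels clipped at the top ('glow over') or bottom
    ('glow under') of the dynamic range."""
    max_val = 2**bit_depth - 1
    over = float(np.mean(image >= max_val))
    under = float(np.mean(image <= 0))
    return over, under

# Example with synthetic 12-bit data
rng = np.random.default_rng(0)
img = rng.integers(0, 4096, size=(256, 256))
over, under = saturation_report(img)
# If 'over' is non-zero, reduce gain or exposure before acquiring the stack.
```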

For optimal deconvolution results, imaging of the z-stack should be started above the object of interest, where relevant structures are still slightly out of focus. The same is true for the lower end: the experimenter should stop image acquisition just beneath the object, where structures are again slightly out of focus.

With an optimal setup and image acquisition, the rest of the work is delegated to the computer. Using a suitable PSF, the deconvolution algorithm recalculates the real properties of the specimen. This is an iterative process and needs a powerful computer system. Some of the main algorithms are listed here:

Deconvolution in the broader sense:
- No neighbors
- Nearest neighbors

Deconvolution in the narrow sense:
- Linear methods: Wiener filter, inverse filtering; linear least squares (LLS)
- Constrained iterative methods: Jansson–van Cittert; nonlinear least squares
- Statistical image restoration: maximum likelihood; maximum a posteriori; maximum penalized likelihood; blind deconvolution

Table 1: Different algorithms for deconvolution in the broader and in the narrow sense

It is noteworthy that the first two algorithms (no neighbors and nearest neighbors) do not belong to the group of algorithms for "deconvolution in the narrow sense". They are classified as "deconvolution in the broader sense" because they more or less filter the original signal. Consequently, a lot of information is lost, which is not the case for real deconvolution. In short, the first two methods are deblurring processes which subtract the estimated blur. Their great advantage is speed: they take much less time than classical deconvolution. Figure 3 shows the effect of deblurring on signal intensity: whereas the original image has a maximum intensity value of 1,842, the deblurred image has a maximum intensity value of only about 550.

Fig. 3: Deconvolution in the broader sense – deblurring. Deconvolution in the broader sense is not a real deconvolution applying a PSF. This deblurring process is more of a filtering mode in which a lot of sensitivity is lost. Whereas the original image has a maximum intensity value of 1,842, the nearest-neighbor-filtered image reaches only about 550.
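A minimal sketch of such a nearest-neighbor deblurring step is shown below. The Gaussian blur standing in for the defocus PSF and the subtraction weight c are illustrative assumptions, not the exact filter of any particular package; square planes are assumed for brevity.

```python
import numpy as np

def gaussian_blur(plane, sigma=2.0):
    """Blur one focal plane (FFT convolution with a Gaussian standing in
    for the defocus PSF that maps a neighboring plane into this one)."""
    n = plane.shape[0]
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    k /= k.sum()
    return np.real(np.fft.ifft2(np.fft.fft2(plane) * np.fft.fft2(np.fft.ifftshift(k))))

def nearest_neighbor_deblur(stack, c=0.45):
    """Subtract blurred copies of the planes directly above and below
    each plane; c is an assumed, tunable subtraction weight."""
    out = np.empty_like(stack)
    for z in range(stack.shape[0]):
        above = stack[max(z - 1, 0)]
        below = stack[min(z + 1, stack.shape[0] - 1)]
        est = stack[z] - c * (gaussian_blur(above) + gaussian_blur(below))
        out[z] = np.clip(est, 0.0, None)  # negative intensities are unphysical
    return out
```

Because blur estimated from the neighbors is subtracted rather than re-assigned, overall intensity can only decrease – which is exactly the loss of sensitivity described above.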

Fig. 4: The iterative process of deconvolution starts with an estimation of the microscopic object. Then an artificial convolution with a previously calculated PSF is carried out. Next, the real image stack is compared with the artificially convolved object. From this comparison, an improved estimation of the object is obtained and the whole process is repeated, resulting in a final image restoration.

One of the most commonly used algorithms for real, iterative deconvolution is blind deconvolution (see Table 1). In this case an adaptive PSF is utilized, which means that the PSF is modified during the procedure; the object estimation is adapted as well. This kind of deconvolution has the reputation of being very robust. Figure 5 shows an example of blind deconvolution. A z-stack of human adenocarcinoma cells with triple staining (DAPI: nucleus, GFP: intracellular vesicles, Alexa 568: plasma membrane) was recorded and a blind deconvolution was carried out. The first obvious improvement is the elimination of blur around the objects. Furthermore, the orthogonal profiles (xz, zy) show the minimization of the typical hourglass shapes. In addition, there is a gain in contrast and intensity: whereas the original image has a maximum intensity value of 1,842, the deconvolved image has a maximum of 18,875. This again shows the difference between real deconvolution and deblurring, where sensitivity is lost.

Coming back to real deconvolution, it should be noted that this process yields a gain in sensitivity. Image restoration re-assigns blurred (out-of-focus) signals to the positions where they were generated. The increase in information is based on the following iterative procedure: first, the microscopic object is estimated. Then an artificial convolution with a PSF is carried out. After that, the real image stack is compared with the artificially convolved object. In this way, an improved estimation of the object becomes possible and the whole process starts again, ending in a final image restoration (see Figure 4).
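This estimate–convolve–compare–update loop is, for example, exactly the structure of the Richardson–Lucy (maximum likelihood) algorithm. The article does not specify which algorithm a given software package uses, so the following 2D sketch is only illustrative, with a symmetric Gaussian assumed as PSF.

```python
import numpy as np

def fft_convolve(img, psf):
    """Circular convolution with a centered PSF via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

def fft_correlate(img, psf):
    """Correlation with the PSF (the adjoint of the blur)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(np.fft.ifftshift(psf)))))

def richardson_lucy(observed, psf, iterations=50):
    """Iterative restoration: convolve the current estimate with the PSF,
    compare it with the observed image, and refine the estimate."""
    estimate = np.full_like(observed, observed.mean())   # initial guess
    for _ in range(iterations):
        reblurred = fft_convolve(estimate, psf)          # artificial convolution
        ratio = observed / np.maximum(reblurred, 1e-12)  # compare with real data
        estimate = estimate * fft_correlate(ratio, psf)  # improved estimation
    return estimate

# Example: restore a blurred point (Gaussian PSF assumed for illustration)
n = 64
ax = np.arange(n) - n // 2
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx**2 + yy**2) / 8.0)
psf /= psf.sum()

obj = np.zeros((n, n))
obj[32, 32] = 1.0
blurred = fft_convolve(obj, psf)
restored = richardson_lucy(blurred, psf)
# The restored peak is higher than the blurred one: out-of-focus
# intensity has been re-assigned to the position where it was generated.
```

The multiplicative update keeps intensities non-negative, which is one reason this family of algorithms is popular for fluorescence data.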

The ultimate goal of deconvolution is shown in the video, where all the information gained is combined into one three-dimensional picture of the object. After several thousand rounds of deconvolution have been calculated, the experimenter is rewarded with a view of his specimen from every conceivable angle.

Fig. 5 and video: Blind deconvolution. Human adenocarcinoma cells were imaged in a z-stack. After blind deconvolution, blur is reduced. Furthermore, there is sharper contrast and higher intensity. Additionally, the hourglass-shaped distortion of the object is minimized.

To sum up, the powerful tool of deconvolution makes it possible to eliminate out-of-focus information, which is reflected in reduced blur. This is a precondition for all colocalization analyses, because in raw data the out-of-focus signals from separate objects may overlap and be misinterpreted as colocalization. Additionally, there is a gain in image sensitivity compared to the original, without irradiating the object excessively with a high-energy light source. Finally, a 3D impression of the object can be enjoyed, which may help to visualize cellular conditions in a very realistic way. All these improvements rest on high-performance hardware and software, which helps to make deconvolution an attractive and sometimes even more viable alternative to confocal microscopy.

All images courtesy of Karl-Heinz Körtje, Application Specialist Research in the Field Support Team Europe, Leica Microsystems
