Definitions of Basic Technical Terms for Digital Microscope Cameras and Image Analysis

[Image: Basic imaging principle of digital imaging]

Most microscopes today are operated with a camera. The characteristics of the camera often decide whether the acquired image will reveal what a researcher wants to see. But when you dive into camera terminology, the technical terms can be overwhelming. We have compiled the most important terms with a concise explanation to provide orientation. They are ordered alphabetically.

You can read more about the basic principles behind digital camera technologies and how microscope cameras work in this Introduction to Digital Camera Technology.

Binning

Binning is a technique to boost camera frame rate and dynamic range while reducing noise, at the cost of resolution. It is often used for high-speed fluorescence time-lapse experiments. Rather than reading out the data of each individual pixel, the data of adjacent pixels are combined and read out together as a super pixel. Binning values between 2x2 and 8x8 are often used. It is important to note that 2x2 binning generates a pixel that is 4 times the size of an original pixel.

The effect of binning depends on the sensor type used in the camera, as indicated in the table below.

 

[Table: effect of binning on speed, data volume, resolution, and SNR for CCD, EMCCD, CMOS, and sCMOS sensors]
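
As an illustration, the short Python/NumPy sketch below shows how 2x2 binning combines four adjacent pixel values into one super pixel. The array size and intensity values are hypothetical and only serve to demonstrate the principle.

    import numpy as np

    # Hypothetical 4 x 4 sensor readout (arbitrary intensity values)
    image = np.arange(16, dtype=np.float64).reshape(4, 4)

    # 2x2 binning: reshape so each 2x2 block gets its own pair of axes,
    # then sum the four neighboring pixel values into one super pixel
    binned = image.reshape(2, 2, 2, 2).sum(axis=(1, 3))

    print(binned.shape)  # (2, 2) -> resolution is halved in each dimension
    print(binned)        # each value is the sum of one 2x2 pixel block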

Bit-depth

The bit depth of a camera sensor describes its ability to transform the analog signal coming from the pixel array into a digital signal characterized by gray levels or gray-scale values. It is a feature of the AD converter. The greater the bit depth, the more gray values the converter can output and the more detail can be reproduced in the image.
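
The relationship between bit depth and the number of available gray levels is simply a power of two; the following Python lines illustrate it for a few common bit depths:

    # Number of gray levels an AD converter with a given bit depth can output
    for bit_depth in (8, 12, 14, 16):
        print(bit_depth, "bit ->", 2 ** bit_depth, "gray levels")
    # 8 bit -> 256, 12 bit -> 4096, 14 bit -> 16384, 16 bit -> 65536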

Brightness

Brightness describes the relative intensity perceived by an observer or registered by a sensor. In the case of a digital image, the intensity is averaged over the whole sensor.

Color look-up table (CLUT)

Digital images are composed of an array of individual pixels. Their color information can be stored as a number code where every color is represented by a distinct numerical value.

A color look-up table is an index that stores these values. They are mainly based on the RGB color space, which is generally used for display on monitors.

The selection of a suitable color look-up table for a specific use depends on the user's own judgment and needs. Experience shows, however, that certain color look-up tables are particularly useful for specific applications. For example, the CLUT "Green" is commonly used for recording specimens marked with Alexa 488, FITC, or similar fluorescent dyes which emit within the green spectral range. On the other hand, the CLUT "Red" is used for samples stained with TRITC, Texas Red, Cy3, or similar fluorescent dyes emitting within the red spectral range.

"CMYK" is a special color look-up table to deal with the CMYK color space, generally used for the printer system color output.

Color space (RGB, CMY, CMYK)

In every imaging system (e.g. monitor or printout), any color depiction is based on a combination of single basic colors. Imaging methods are differentiated by additive and subtractive color mixing. For example, on a black monitor screen, light of a certain type has to be emitted to yield a given color. In that case the light is based on red, green, and blue (RGB).

If all three colors are illuminated, white is created. If all three colors are switched off, black is created. The human eye, as well as digital cameras and monitors, is adapted to the RGB model.

Printers, on the other hand, use subtractive color mixing, because on material surfaces light has to be reflected from a white substrate such as paper. As a result, a printer needs to calculate which inks have to be added to yield a given color in combination with the white substrate. In that case the combination of cyan, magenta, and yellow (CMY) – the complementary colors of red, green, and blue – forms the base for all other colors in the spectrum. In this model, the addition of all three colors results in black, while the absence of all three colors results in white.

Note: In practice, black is printed as a separate ink to avoid layering too many inks on top of each other and to achieve a more vivid black. For this reason the color space is also called CMYK, where K stands for the key plate, the printing plate that carries the black ink.
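
The relationship between the additive and the subtractive model can be sketched in a few lines of Python. With color values normalized to the range 0–1, cyan, magenta, and yellow are simply the complements of red, green, and blue (a simplified conversion that ignores the separate black/key channel):

    # Simplified RGB -> CMY conversion with values normalized to 0..1
    def rgb_to_cmy(r, g, b):
        return 1.0 - r, 1.0 - g, 1.0 - b

    print(rgb_to_cmy(1.0, 0.0, 0.0))  # pure red -> (0, 1, 1): magenta + yellow ink yields red
    print(rgb_to_cmy(1.0, 1.0, 1.0))  # white    -> (0, 0, 0): no ink on white paper
    print(rgb_to_cmy(0.0, 0.0, 0.0))  # black    -> (1, 1, 1): all three inks combined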

Contrast

The contrast of an image depends on the difference in color and intensity of the depicted object from its background. Expressed in a mathematical formula, contrast (C) can be described as a ratio (in %) of intensities (I).
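
One commonly used convention (contrast can be defined in several ways; this Weber-type form is given here only as an illustration) relates the specimen intensity to the background intensity:

    C [%] = (I_specimen - I_background) / I_background × 100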

As this relationship shows, the greater the difference between specimen and background intensity, the better the contrast will be.

In microscopy, the specimen has to interact with light to produce contrast, for example by absorption, reflection, diffraction, or fluorescence.

Deconvolution

Deconvolution is a technique that reassigns out-of-focus information to its point of origin in a microscopic image by applying a mathematical algorithm. By doing so, the user can achieve sharper images of specific focus levels and a more realistic 3D impression of the structure of interest.

Dwell time

In confocal microscopy, the laser beam scans each point of the specimen (corresponding to one pixel of the resulting image) for a given time. This time is called the dwell time. Understandably, extended dwell times promote photobleaching and stress the specimen.
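
Since the dwell time applies to every scanned pixel, it directly determines the minimum time per frame. The small Python example below illustrates the arithmetic; the image format and dwell time are arbitrary example values:

    # Minimum scan time per frame, ignoring line/frame overhead (flyback, etc.)
    pixels_x, pixels_y = 512, 512   # example image format
    dwell_time_s = 2e-6             # example dwell time: 2 microseconds per pixel

    frame_time_s = pixels_x * pixels_y * dwell_time_s
    print(frame_time_s)             # ~0.52 s per frame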

Dynamic range

The dynamic range of a microscope camera gives information on the lowest and highest intensity signals a sensor can record simultaneously. With a low dynamic range sensor, large signals can saturate the sensor, whereas weak signals become lost in the sensor noise. A large dynamic range is especially important for fluorescence imaging.
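
Dynamic range is often estimated as the ratio of the largest signal a pixel can hold (its full-well capacity) to the smallest detectable signal (roughly the read noise). The sketch below uses hypothetical sensor values to show the calculation:

    import math

    # Hypothetical sensor characteristics
    full_well_capacity_e = 30000   # electrons a pixel can hold before saturating
    read_noise_e = 3.0             # read noise in electrons (rms)

    dynamic_range = full_well_capacity_e / read_noise_e
    print(round(dynamic_range))                    # 10000 : 1
    print(round(20 * math.log10(dynamic_range)))   # ~80 dB
    print(round(math.log2(dynamic_range), 1))      # ~13.3 bits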

Exposure time

The exposure time of a digital camera determines the duration the camera chip is exposed to light from the specimen. Depending on light intensity, this time can typically range between several milliseconds and a few seconds for most imaging applications.

Gain

Digital cameras translate photon data into digital data. During this process, electrons coming from the sensor run through a pre-amplifier. Gain is the amplification applied to the signal by the image sensor. It should be noted that not only the signal, but also the noise is boosted.

Gamma (Correction)

The human eye’s light perception is non-linear. Our eyes would not perceive two photons to be twice as bright as one; we would only recognize them to be a fraction brighter than one. In contrast to the human eye, a digital camera’s light perception is linear. Two photons induce twice the amount of signal as one. Gamma can be considered as the link between the human eye and the digital camera.

This can be expressed by the following relation, where Vout is the output (detected) luminance value and Vin is the input (actual) luminance value:

Vout = Vin^gamma

By changing gamma – that is, applying gamma correction – it is possible to adapt the digital image, recorded linearly by the camera, to the nonlinear perception of the human eye. This correction can be done by most camera chips. Furthermore, digital imaging software often has its own gamma correction option.
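
A minimal Python/NumPy sketch of such a correction on an 8-bit image could look like this; the gamma value of 0.45 (roughly the inverse of the commonly used display gamma of 2.2) and the random test image are example assumptions:

    import numpy as np

    def gamma_correct(image_8bit, gamma):
        # Normalize to 0..1, apply Vout = Vin ** gamma, rescale back to 8 bit
        normalized = image_8bit.astype(np.float64) / 255.0
        corrected = normalized ** gamma
        return (corrected * 255.0 + 0.5).astype(np.uint8)

    image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    brightened = gamma_correct(image, 0.45)   # gamma < 1 lifts dark tones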

Intensity

Intensity is a measure of energy. In optics, it describes the amount of light energy emitted by an object per unit time and unit area.

Noise

Noise is an undesirable property inherent in all measurements. It is a major concern for scientific images as it can affect your ability to quantify signals of interest. The most important parameter to consider when imaging is the signal-to-noise ratio, which is the ratio of the signal you are trying to collect to the noise in your image. Noise can be classified into several categories:

Optical noise: Unwanted optical signal, often caused by high background staining resulting from poor sample preparation or by high sample autofluorescence.

Dark noise: Caused by thermally generated electrons in the sensor and directly related to the length of integration. Dark noise can be reduced by cooling the imaging sensor or decreasing the exposure time.

Read noise: An electrical noise source introduced to the signal as the charge is read out from the camera sensor. Read noise can be reduced by slowing the sensor readout rate, thus reducing the maximum achievable frame rate, or by switching to more advanced sensor types, e.g. EMCCD and sCMOS sensors.

Photon shot noise: Noise inherent in any optical signal caused by the stochastic nature of photons hitting the sensor. This is only of concern to very low light applications. Collecting more signal reduces the impact of shot noise in an image.

The simplest way to improve your signal-to-noise ratio is to collect more signal, by integrating for longer or increasing the illumination intensity. These approaches are not always feasible, at which point lower-noise cameras are required.

Nyquist theorem

Imaging in microscopy implies a sampling process - from a specimen signal to a digital image. The Nyquist Theorem describes an important rule for sampling processes.

In principle, the accuracy of reproduction increases with a higher sampling frequency.

The Nyquist Theorem states that the sampling frequency must be greater than twice the bandwidth of the input signal to recreate the original input from the sampled data. In the case of a digital camera, this manifests principally in the pixel size. For best results, a pixel should be at least three times smaller than the smallest structure you want to resolve; in other words, a minimum of 3 pixels per resolvable unit is preferable.
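
As a sketch of how this is used in practice, the Python lines below check whether a camera pixel is small enough for a given optical resolution, applying the "three pixels per resolvable unit" rule of thumb from above. The resolution limit, magnification, and pixel size are example values:

    # Example: is the camera pixel small enough to sample the optical resolution?
    optical_resolution_um = 0.25   # smallest resolvable structure in the specimen (example)
    magnification = 60             # total magnification onto the camera sensor (example)
    camera_pixel_um = 6.5          # physical pixel size of the camera (example)

    # Size of the smallest resolvable structure projected onto the sensor
    projected_um = optical_resolution_um * magnification   # 15 um

    # Rule of thumb: at least 3 pixels per resolvable unit
    max_allowed_pixel_um = projected_um / 3                 # 5 um

    print(camera_pixel_um <= max_allowed_pixel_um)          # False -> undersampled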

Pixel

A pixel in a camera is the basic light-sensitive unit of its sensor. This applies to all two-dimensional array sensors, including CCD, EMCCD, CMOS, and sCMOS microscope cameras. The number of pixels on a sensor is a frequently quoted figure, e.g. a 5-megapixel camera has 5,000,000 pixels. The number of pixels is often confused with the resolution of the sensor, as individual pixels can vary significantly in size on different sensor types.

Quantum efficiency (QE)

The quantum efficiency of a sensor gives an indication of how sensitive it is. It describes the percentage of photons striking the sensor at a given wavelength that are converted into electrons. The QE of a sensor varies with wavelength and is usually plotted as a QE curve.
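
In practice, the QE simply scales the number of detected photoelectrons; a minimal sketch with example numbers:

    # Hypothetical example: photons arriving at one pixel during an exposure
    incident_photons = 1000
    quantum_efficiency = 0.82   # 82 % QE at the wavelength of interest (example)

    detected_electrons = incident_photons * quantum_efficiency
    print(detected_electrons)   # 820 photoelectrons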

RGB/Grayscale histogram

Each pixel of an image has a certain gray-scale value. The spectrum of gray-scale values ranges from pure black (0) to pure white (255 at 8-bit depth, 4095 at 12-bit depth, etc.).

Histograms show the distribution of gray-scale values within area regions of interest (ROI), i.e. the number of pixels is determined for each gray-scale value and the result is shown as a curve.

With the help of a histogram, various settings such as the camera exposure time can be optimized. A distribution of gray-scale values (x-axis) that spreads across the available range without clipping at either end indicates optimal use of the camera's dynamic range.
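
A gray-scale histogram can be computed in a few lines of Python/NumPy; the random 8-bit test image here is only a placeholder for a real acquisition:

    import numpy as np

    image = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)

    # Count how many pixels fall on each of the 256 possible gray values
    counts, bin_edges = np.histogram(image, bins=256, range=(0, 256))

    # Simple checks for clipping at either end of the gray-scale range
    print("pixels at pure black:", counts[0])
    print("pixels at pure white:", counts[255])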

Line Profile

This tool measures gray-scale values along linear regions of interest (ROI), displays them graphically as a curve, and carries out statistical processing on them.

Stack Profile

This tool measures mean gray-scale values using area regions of interest (ROI), displays them graphically as a curve, and carries out statistical processing on them.

Saturation

The basic working principle of a digital camera implies that photons hitting the photodiodes induce electrons which are collected, moved, and finally converted into a digital signal. Concerning the transfer of electrons, there are two bottlenecks (in a CCD camera):

  • The charge capacity of individual photodiodes (Full-well capacity)
  • The maximum charge transfer capacity of the camera chip

If either is exceeded, the additional information cannot be handled by the camera, leading to artifacts in the digital image (e.g. blooming).

Note: The look-up table Glow (O&U) in the LAS X software can help to control saturation.
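
A simple software check for saturation is to count the pixels that have reached the maximum gray value of the camera's bit depth; a minimal Python/NumPy sketch with an assumed 12-bit camera and a random test image:

    import numpy as np

    bit_depth = 12
    max_value = 2 ** bit_depth - 1   # 4095 for a 12-bit camera

    # Hypothetical 12-bit image stored in a 16-bit array
    image = np.random.randint(0, max_value + 1, size=(512, 512), dtype=np.uint16)

    saturated = np.count_nonzero(image == max_value)
    print(saturated, "saturated pixels")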

Sensor types for microscope cameras (CCD, EMCCD, CMOS, sCMOS)

CCD Microscope Camera: Microscope cameras based on a Charge-Coupled Device (CCD) sensor mainly find application in brightfield and basic fluorescence imaging techniques. As in any other digital camera sensor, its individual pixels generate a charge upon irradiation with light, which is finally converted into a digital signal. In comparison to CMOS-type sensors, only a single output node is used for data collection in a CCD sensor.

EMCCD Microscope Camera: In simple terms, an EMCCD (Electron Multiplying Charge-Coupled Device) sensor is a CCD sensor with the addition of a special EM gain register, placed between the sensor and the readout electronics. This register amplifies the signal. Moreover, EMCCD sensors can be back-thinned, with a typical peak quantum efficiency of more than 90%. Extremely low-light applications in particular benefit from the use of EMCCD cameras.

CMOS Microscope Camera: Cameras based on Complementary Metal Oxide Semiconductor (CMOS) sensors were originally used in cell phones and low-end consumer cameras. As the technology has improved, CMOS microscope cameras have become a major imaging device for standard brightfield microscopy. In contrast to CCDs, CMOS sensors feature in-pixel electronics. Their readout principle with thousands of readout nodes saves time, since traditional CCD sensors use only a single readout node.

sCMOS Microscope Camera: Scientific CMOS cameras – or sCMOS cameras – evolved from CMOS microscope cameras. Specifically adapted to scientific requirements, this type of sensor is free from common drawbacks, such as high noise levels and poor homogeneity, that CMOS sensors can suffer from. Their fast frame rates, high dynamic range, and low noise perfectly support high-end fluorescence imaging applications.

Signal-to-noise ratio

The signal-to-noise ratio (SNR) measures the overall quality of an image: the higher the SNR, the better the image. Signal refers to the number of photons originating from the object of interest that are collected by the sensor and converted into an electrical signal, while noise here refers to the stochastic nature of photon impacts on the sensor. This fluctuation in the number of detected photons is called photon shot noise. Other noise contributions are dark current noise of the detector, read noise from the AD converter, background from the sample, room illumination, etc.
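
Under the common assumption that these noise sources are independent, their contributions add in quadrature. The small Python sketch below estimates the SNR for a single pixel; the signal, dark current, exposure time, and read noise are all hypothetical example values:

    import math

    signal_e = 2000.0          # photoelectrons collected from the object (example)
    dark_current_e_s = 0.5     # dark current in electrons per pixel per second (example)
    exposure_s = 1.0           # exposure time in seconds (example)
    read_noise_e = 2.0         # read noise in electrons, rms (example)

    shot_noise = math.sqrt(signal_e)                       # photon shot noise
    dark_noise = math.sqrt(dark_current_e_s * exposure_s)  # dark current noise
    total_noise = math.sqrt(shot_noise**2 + dark_noise**2 + read_noise_e**2)

    snr = signal_e / total_noise
    print(round(snr, 1))       # ~44.7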
