The Scientific CMOS Camera K8

For Life Science Imaging Applications and Analysis


What is a CMOS sensor/camera?

The term CMOS refers to a type of image sensor. The two main types of image sensor commonly used in cameras today are the CCD (Charge Coupled Device) and the CMOS (Complementary Metal Oxide Semiconductor) sensor. Both are two-dimensional arrays of pixels; each pixel records the amount of light in a different region of the image. CCD and CMOS sensors have different electronic layouts, which confer different properties on each sensor type. A few years ago, CCD sensors were preferred for scientific imaging applications as they offered better image quality. However, recent advances in CMOS sensor design enable them to capture images of comparable quality to CCD sensors while also offering additional performance benefits.

What are the advantages of a CMOS camera?

The most important attributes of a sensor are its noise levels and quantum efficiency (QE), which together determine a camera’s sensitivity, along with its number of pixels (resolution) and frame rate. These properties are all interconnected and determined by the architecture of the image sensor. Read noise increases the faster a read-out node must process the data from each pixel, so increasing the sensor resolution or frame rate raises the required read-out speed, resulting in higher noise levels and decreased sensitivity. CCD sensors typically contain a single read-out node, whereas CMOS sensors contain thousands. This single node creates an inherent bottleneck in a CCD sensor: as the frame rate increases, so does the read noise. In a CMOS sensor architecture this bottleneck does not exist, as the many parallel nodes can read out more pixels at higher frame rates while maintaining very low read noise. Recent advances in CMOS design have also increased quantum efficiency, so CMOS sensors now offer lower noise, higher frame rates, greater resolution, and wider dynamic range than CCD sensors.
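The trade-off between read noise and sensitivity described above can be sketched with the standard shot-noise-limited SNR formula. The specific QE and read-noise figures below are illustrative assumptions for comparison, not measured specifications of any particular sensor:

```python
import math

def snr(photons, qe, read_noise):
    """Shot-noise-limited signal-to-noise ratio for a single pixel.

    detected electrons = photons * QE
    total noise        = sqrt(shot noise^2 + read noise^2)
                       = sqrt(detected electrons + read_noise^2)
    """
    signal = photons * qe
    return signal / math.sqrt(signal + read_noise ** 2)

# Illustrative values (assumptions, not specs): a CCD read out at a high
# frame rate suffers elevated read noise; a CMOS sensor stays low.
ccd_snr = snr(photons=100, qe=0.75, read_noise=10.0)   # read-noise dominated
cmos_snr = snr(photons=100, qe=0.80, read_noise=1.5)   # near shot-noise limited
print(f"CCD SNR:  {ccd_snr:.2f}")
print(f"CMOS SNR: {cmos_snr:.2f}")
```

At the same light level the low-read-noise sensor yields a noticeably higher SNR, which is why read noise matters most in low-light imaging where the shot-noise term is small.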

What is a Back Side Illuminated (BSI) CMOS sensor?

CMOS sensors are made from silicon wafers. As light strikes the silicon, charge is generated by the photoelectric effect. Photons only penetrate a few microns into the silicon, so this charge accumulates near the surface. A thin layer of electronics is required to move the charge to the read-out nodes, so during manufacture a layer of electronics must be applied to the photosensitive surface; however, this layer blocks some of the light from reaching the silicon. Micro lenses can increase the QE of a front-illuminated sensor, enabling a maximum QE of around 80%. Back-thinned, or Back Side Illuminated (BSI), sensors overcome this limitation: the thick layer of excess silicon on the back of the sensor is polished away and the sensor is flipped around so that the “back” is exposed to light. Because the remaining silicon is so thin, the electronics on the other side are still able to move the accumulated charge to the read-out node. As back-thinned sensors no longer have a layer of electronics between the photosensitive silicon and the incoming light, QE can increase up to 95%, offering significantly greater sensitivity.
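The sensitivity gain from back-side illumination follows directly from the QE figures above: in the shot-noise-limited case SNR scales with the square root of the detected electrons, so raising QE from 80% to 95% improves SNR by sqrt(0.95/0.80). A minimal sketch, assuming a hypothetical exposure of 1000 incident photons per pixel:

```python
import math

def shot_limited_snr(photons, qe):
    """Shot-noise-limited SNR: sqrt of detected electrons (photons * QE)."""
    return math.sqrt(photons * qe)

front_snr = shot_limited_snr(1000, 0.80)  # front-illuminated with micro lenses
bsi_snr = shot_limited_snr(1000, 0.95)    # back side illuminated
gain = bsi_snr / front_snr                # sqrt(0.95 / 0.80), about 9% better SNR
print(f"BSI SNR gain: {gain:.3f}x")
```

The ~9% SNR improvement is independent of light level in this regime, which is why BSI sensors are attractive for photon-starved applications such as live-cell fluorescence imaging.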
