On January 30, Ana Doblas of the University of Massachusetts Dartmouth presented her team’s findings on how models based on deep learning (DL) can enhance different microscopy imaging infrastructures (1). The research was presented at SPIE Photonics West in San Francisco, California.
Deep learning is a facet of artificial intelligence (AI) that allows computers to process data in ways inspired by the human brain. DL models can recognize complex patterns in images, text, and other types of data to “produce accurate insights and predictions” (2). Deep learning was initially used for techniques such as image localization and segmentation, image tracking, and classification and diagnosis. Today, the technique can also assist signal-processing methods. Deep learning is now used for different microscopy applications, such as label-free prediction of 3D fluorescent images from brightfield microscopy, digital staining of microscopic images, and enhancing structured illumination microscopy (SIM) reconstruction.
To demonstrate deep learning’s applicability to microscopy experiments, and how it can be used to capture real-time quantitative information, Doblas highlighted confocal microscopy as an example. This technique is commonly used for live-cell imaging, cell and tissue analysis, disease diagnosis, and drug discovery. With deep learning, the goal is to improve the resolution of confocal microscopes. As part of this effort, Doblas and her team focused on implementing, training, and validating two learning-based models for resolution enhancement in confocal microscopy: the conditional generative adversarial network (cGAN) and the cycle generative adversarial network (cycleGAN). The cGAN was trained and validated on simulated datasets with varying levels of noise, while the cycleGAN was trained and validated on a simulated dataset and an experimental dataset.
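For readers unfamiliar with the conditional setup, the following is a minimal sketch of how a cGAN can be trained on paired low- and high-resolution confocal patches, in the spirit of pix2pix. The tiny `Generator` and `Discriminator` networks, the loss weights, and the synthetic batch are illustrative placeholders and are not the architecture or data used by Doblas and her team.

```python
# Minimal pix2pix-style conditional GAN (cGAN) training sketch for confocal
# resolution enhancement. Assumes a paired (low-res, high-res) patch dataset;
# network sizes and hyperparameters are illustrative only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a low-resolution confocal patch to a high-resolution estimate."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the low-res input (2-channel input)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),  # per-patch real/fake logits
        )

    def forward(self, lr, hr):
        return self.net(torch.cat([lr, hr], dim=1))

def train_step(G, D, opt_G, opt_D, lr_img, hr_img, lambda_l1=100.0):
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # Discriminator update: distinguish real (lr, hr) pairs from generated pairs.
    fake_hr = G(lr_img).detach()
    d_real, d_fake = D(lr_img, hr_img), D(lr_img, fake_hr)
    loss_D = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator update: fool the critic while staying close to the ground truth (L1).
    fake_hr = G(lr_img)
    d_fake = D(lr_img, fake_hr)
    loss_G = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake_hr, hr_img)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()

if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
    # Synthetic stand-in for one paired (low-res, high-res) batch.
    lr_img, hr_img = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
    print(train_step(G, D, opt_G, opt_D, lr_img, hr_img))
```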
The cGAN model was tested with images at two different resolutions, one high and one low. It could generate high-resolution images with structural accuracy under conditions with a high signal-to-noise ratio (SNR). However, under noisy conditions, the cGAN model could not properly reproduce structural details. Conversely, the cycleGAN does not require a paired dataset to function, though it generates high-resolution (HR) images with lower similarity to the reference images than the cGAN model. The cGAN model can also reduce out-of-focus information and resolve cell filaments; for example, when applied to experimental U373 cells, the predicted HR image from the trained cGAN model had an average accuracy of around 0.65.
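The practical difference with the cycleGAN is that it learns from unpaired low- and high-resolution images by adding a cycle-consistency term to the adversarial objective. The sketch below shows only that loss structure under stated assumptions; the toy networks, weights, and variable names are illustrative and not drawn from the presentation.

```python
# Sketch of the cycle-consistency objective that lets a cycleGAN train on
# *unpaired* low- and high-resolution confocal images. The toy networks are
# placeholders; only the loss structure is the point.
import torch
import torch.nn as nn

def conv_net(in_ch=1, out_ch=1, ch=32):
    """Tiny stand-in for a translator (generator) or per-pixel critic."""
    return nn.Sequential(
        nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(ch, out_ch, 3, padding=1),
    )

def generator_loss(G_lh, G_hl, D_h, D_l, lr_img, hr_img, lambda_cyc=10.0):
    """Adversarial + cycle-consistency loss for the two unpaired generators."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    fake_hr, fake_lr = G_lh(lr_img), G_hl(hr_img)

    # Adversarial terms: each translated image should look real to its domain critic.
    pred_h, pred_l = D_h(fake_hr), D_l(fake_lr)
    adv = bce(pred_h, torch.ones_like(pred_h)) + bce(pred_l, torch.ones_like(pred_l))

    # Cycle terms: translating there and back must recover the original image,
    # which substitutes for the missing pixel-wise paired supervision.
    cyc = l1(G_hl(fake_hr), lr_img) + l1(G_lh(fake_lr), hr_img)
    return adv + lambda_cyc * cyc

if __name__ == "__main__":
    G_lh, G_hl = conv_net(), conv_net()   # LR->HR and HR->LR translators
    D_h, D_l = conv_net(), conv_net()     # per-pixel real/fake critics
    lr_img, hr_img = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
    print(generator_loss(G_lh, G_hl, D_h, D_l, lr_img, hr_img).item())
```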
According to Doblas, there are many advantages to using digital holographic microscopy (DHM), a label-free imaging technique used to reconstruct the complex wavefront diffracted by a given specimen. The system also enables numerical refocusing of any axial plane of the sample, and off-axis DHM systems are suitable for live imaging because they are single-shot methods. This type of system can be used in different materials science scenarios, from dynamic topography to defect inspection to surface analysis and characterization. One type of DHM covered specifically in this presentation is off-axis DHM, which is based on the interference between two waves; its reconstruction stage provides amplitude and phase images from a single recorded hologram. This can lead to accurate imaging, but only if the reference wave is properly compensated.
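To give a sense of what a single-shot off-axis reconstruction involves, here is a simplified numerical sketch: the hologram’s Fourier spectrum is filtered to isolate the +1 diffraction order, which is then re-centered as a crude stand-in for reference-wave compensation before amplitude and phase are recovered. The window size, peak-finding strategy, and synthetic hologram are assumptions of this sketch, not the exact pipeline described in the talk.

```python
# Simplified off-axis DHM reconstruction: Fourier-filter the +1 order,
# re-center it (a rough stand-in for reference-wave compensation), and
# recover amplitude and phase from a single frame.
import numpy as np

def reconstruct_off_axis(hologram, sideband_radius=32):
    H = np.fft.fftshift(np.fft.fft2(hologram))
    mag = np.abs(H).copy()

    # Suppress the zero-order (DC) region so the brightest remaining peak is a sideband.
    cy, cx = np.array(H.shape) // 2
    mag[cy - sideband_radius:cy + sideband_radius,
        cx - sideband_radius:cx + sideband_radius] = 0
    py, px = np.unravel_index(np.argmax(mag), mag.shape)

    # Crop a window around the +1 order and move it to the spectrum center,
    # which digitally removes the linear phase ramp of the tilted reference wave.
    win = H[py - sideband_radius:py + sideband_radius,
            px - sideband_radius:px + sideband_radius]
    centered = np.zeros_like(H)
    centered[cy - sideband_radius:cy + sideband_radius,
             cx - sideband_radius:cx + sideband_radius] = win

    field = np.fft.ifft2(np.fft.ifftshift(centered))
    return np.abs(field), np.angle(field)   # amplitude and wrapped phase images

if __name__ == "__main__":
    # Synthetic off-axis hologram: smooth phase object interfering with a tilted reference.
    N = 256
    y, x = np.mgrid[0:N, 0:N]
    phase = 2.0 * np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 30 ** 2))
    obj = np.exp(1j * phase)
    ref = np.exp(1j * 2 * np.pi * (0.15 * x + 0.10 * y))
    holo = np.abs(obj + ref) ** 2
    amp, phi = reconstruct_off_axis(holo)
    print(amp.shape, float(phi.min()), float(phi.max()))
```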
Current automated methods do not allow for video-rate imaging, but cGAN models can reconstruct quantitative phase imaging (QPI) images at video rate. The reconstructions were evaluated by quantifying noise and the number of phase discontinuities, yielding QPI images without phase distortions and with reduced speckle noise. cGAN models also outperform traditional methods on holograms from other DHM systems.
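As an illustration of the kind of quality metrics mentioned above, the sketch below counts large phase jumps between neighboring pixels and estimates background noise in a reconstructed phase map. The jump threshold, the background-mask convention, and the synthetic example are assumptions for this sketch, not the evaluation protocol used in the study.

```python
# Illustrative QPI quality metrics: count phase discontinuities and estimate
# background noise in a reconstructed phase image.
import numpy as np

def count_phase_discontinuities(phase, threshold=np.pi):
    """Count neighboring-pixel phase jumps larger than `threshold` (default: pi)."""
    dy = np.abs(np.diff(phase, axis=0))
    dx = np.abs(np.diff(phase, axis=1))
    return int(np.sum(dy > threshold) + np.sum(dx > threshold))

def background_noise(phase, background_mask):
    """Standard deviation of the phase over a nominally flat background region."""
    return float(np.std(phase[background_mask]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phase = rng.uniform(-0.05, 0.05, (128, 128))   # flat, low-noise background
    phase[40:60, 40:60] += 2 * np.pi               # artificial wrap-like jump
    mask = np.zeros_like(phase, dtype=bool)
    mask[:20, :20] = True                          # assumed background corner
    print(count_phase_discontinuities(phase), background_noise(phase, mask))
```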
Imaging modalities serve many purposes, and integrating deep learning with imaging systems can help enhance their performance. cGAN and cycleGAN models can help provide high-resolution images in confocal scanning microscopy, and cGANs may be able to compensate and reconstruct DHM holograms without spatial filtering and compensation processes. cGAN-QPI models can reconstruct cells that other methods handle incorrectly and can be applied to different DHM systems.
(1) Doblas, A.; Trujillo, C. Overview of Learning-Based Models to Enhance the Imaging Infrastructure. Presented at SPIE Photonics West, San Francisco, CA, USA, January 30–31, 2024.
(2) What is Deep Learning? Amazon Web Services, 2024. https://aws.amazon.com/what-is/deep-learning/ (accessed 2024-02-05).