On January 30, 2024, Ana Doblas of the University of Massachusetts Dartmouth presented her team’s findings on how models based on deep learning (DL) can enhance different microscopy imaging infrastructures (1). The research was presented at SPIE Photonics West in San Francisco, California.
Deep learning is a facet of artificial intelligence (AI) that allows computers to process data in ways inspired by the human brain. DL models can recognize complex patterns in images, text, and other types of data to “produce accurate insights and predictions” (2). Deep learning was initially applied to tasks such as image localization and segmentation, image tracking, and classification/diagnosis. Today, the technique can also be used to assist signal-processing methods. Deep learning is now used for a range of microscopy applications, such as label-free prediction of 3D fluorescence images from brightfield microscopy, digital staining of microscopic images, and enhanced structured illumination microscopy (SIM) reconstruction.
To demonstrate deep learning’s applicability to microscopy experiments, and how it can be used to capture quantitative information in real time, Doblas highlighted confocal microscopy as an example. This technique is commonly used for live-cell imaging, cell and tissue analysis, disease diagnosis, and drug discovery. With deep learning, the goal is to improve the resolution of confocal microscopes. As part of this effort, Doblas and her team focused on implementing, training, and validating two learning-based models for resolution enhancement in confocal microscopy: a conditional generative adversarial network (cGAN) and a cycle-consistent generative adversarial network (cycleGAN). The cGANs were trained and validated on simulated datasets with varying levels of noise, while the cycleGANs were trained and validated on a simulated dataset and an experimental dataset.
The cGAN method was tested on paired images at two different resolutions, one high and one low. The cGAN model could generate high-resolution images with structural accuracy under conditions with a high signal-to-noise ratio (SNR). However, under noisy conditions, the cGAN model could not properly reproduce structural details. Conversely, the cycleGAN does not require a paired dataset to function, although it generates high-resolution (HR) images with lower similarity than the cGAN model. The cGAN model can also reduce out-of-focus information and resolve cell filaments; for example, when applied to experimental U373 cells, the predicted HR image from the trained cGAN model had an average accuracy of around 0.65.
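To make the paired-versus-unpaired distinction concrete, the sketch below contrasts the two training signals in PyTorch. It is an illustration under assumed settings, not the team’s implementation: the tiny networks, the 100× L1 weight, and the single-channel 64 × 64 images are placeholders. The cGAN generator is penalized against a matched high-resolution target, while the cycleGAN replaces that pairing requirement with a cycle-consistency term.

```python
# Minimal sketch (assumed settings, not the presenters' code) contrasting a paired
# cGAN loss with an unpaired cycleGAN cycle-consistency loss.
import torch
import torch.nn as nn

def conv_net(in_ch, out_ch):
    """Tiny stand-in for a generator or discriminator backbone."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, out_ch, 3, padding=1),
    )

G_lr2hr = conv_net(1, 1)          # low-res -> high-res generator
G_hr2lr = conv_net(1, 1)          # high-res -> low-res generator (cycleGAN only)
D_hr = nn.Sequential(conv_net(1, 1), nn.AdaptiveAvgPool2d(1))  # real/fake score

adv_loss = nn.BCEWithLogitsLoss()
l1_loss = nn.L1Loss()

low_res = torch.rand(4, 1, 64, 64)    # simulated low-resolution batch
high_res = torch.rand(4, 1, 64, 64)   # matched high-resolution batch (cGAN case only)

# cGAN generator loss: adversarial term plus pixel-wise L1 to the PAIRED target.
fake_hr = G_lr2hr(low_res)
score = D_hr(fake_hr).flatten(1).mean(dim=1)
cgan_loss = adv_loss(score, torch.ones_like(score)) + 100.0 * l1_loss(fake_hr, high_res)

# cycleGAN cycle-consistency loss: no paired target; the low-res input must be
# recoverable after mapping to high resolution and back.
reconstructed_lr = G_hr2lr(G_lr2hr(low_res))
cycle_loss = l1_loss(reconstructed_lr, low_res)

print(f"cGAN generator loss: {cgan_loss.item():.3f}, cycle loss: {cycle_loss.item():.3f}")
```

In a full training loop, both losses would be minimized alongside the corresponding discriminator updates; the point here is only that the cycleGAN objective never touches a matched high-resolution image.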
According to Doblas, there are many advantages to digital holographic microscopy (DHM), a label-free imaging technique used to reconstruct the complex wavefront diffracted by a given specimen. The system also enables one to numerically refocus any axial plane of the sample, and off-axis DHM systems are suitable for live imaging because they are single-shot. This type of system can be used in different materials science scenarios, from dynamic topography to defect inspection to surface analysis and characterization. One type of DHM specifically covered in this presentation is off-axis DHM, which is based on interference between two waves; its reconstruction stage provides amplitude and phase images from a single recorded hologram. This can lead to accurate imaging, but only if the reference wave is properly compensated.
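For context, the NumPy sketch below shows the standard textbook-style single-shot off-axis reconstruction, not the presenters’ code: the hologram’s Fourier spectrum is filtered to isolate one interference order, which is then re-centered to compensate the linear carrier introduced by the tilted reference wave before amplitude and phase are recovered. The synthetic hologram, carrier tilt, and mask radius are illustrative assumptions.

```python
# Minimal sketch of single-shot off-axis DHM reconstruction by Fourier filtering
# (generic pipeline under assumed parameters, not the authors' method).
import numpy as np

def reconstruct_off_axis(hologram, mask_radius=30, dc_block=20):
    """Recover amplitude and phase from one off-axis hologram via Fourier filtering."""
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = spectrum.shape
    cy, cx = ny // 2, nx // 2

    # Locate the +1 diffraction order: suppress the strong zero-order (DC) term and
    # keep only one half-plane so the -1 order is not selected instead.
    mag = np.abs(spectrum)
    mag[cy - dc_block:cy + dc_block, cx - dc_block:cx + dc_block] = 0
    mag[:, :cx] = 0
    py, px = np.unravel_index(np.argmax(mag), mag.shape)

    # Isolate that order and shift it back to the center of the spectrum; this
    # compensates the linear carrier introduced by the tilted reference wave.
    yy, xx = np.ogrid[:ny, :nx]
    mask = (yy - py) ** 2 + (xx - px) ** 2 <= mask_radius ** 2
    centered = np.roll(spectrum * mask, (cy - py, cx - px), axis=(0, 1))

    field = np.fft.ifft2(np.fft.ifftshift(centered))
    return np.abs(field), np.angle(field)

# Synthetic single-shot hologram: a smooth phase object plus a tilted plane reference.
n = 256
y, x = np.mgrid[:n, :n]
obj_phase = 2.0 * np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / (2 * 30 ** 2))
carrier = 2 * np.pi * (64 * x + 48 * y) / n      # tilt of the reference wave
hologram = 2 + 2 * np.cos(carrier + obj_phase)   # recorded interference intensity

amplitude, phase = reconstruct_off_axis(hologram)
print("peak recovered phase (rad):", phase.max())  # should be close to 2.0
```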
Current automated methods do not allow video-rate imaging, but cGAN models can reconstruct quantitative phase imaging (QPI) images at video rate. This is accomplished by quantifying noise and the number of phase discontinuities, which yields QPI images without phase distortions and with reduced speckle noise. cGAN models also outperform traditional methods on holograms from other DHM systems.
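One plausible way to quantify the “number of phase discontinuities” mentioned above, shown in the short sketch below, is to count neighboring-pixel jumps larger than π in the wrapped phase map; a well-compensated reconstruction should produce far fewer such jumps. This is an assumed metric for illustration, not necessarily the criterion used in the presented work.

```python
# Illustrative (assumed) metric: count wrapped-phase jumps larger than pi between
# neighboring pixels as a proxy for phase discontinuities in a QPI image.
import numpy as np

def count_phase_discontinuities(phase_map, threshold=np.pi):
    """Count horizontal and vertical phase jumps exceeding the threshold."""
    dy = np.abs(np.diff(phase_map, axis=0))
    dx = np.abs(np.diff(phase_map, axis=1))
    return int((dy > threshold).sum() + (dx > threshold).sum())

# Example: a nearly flat phase map has no spurious jumps, while an uncompensated
# residual tilt from the reference wave wraps the phase and creates many of them.
n = 128
y, x = np.mgrid[:n, :n]
smooth = np.angle(np.exp(1j * 0.01 * (x + y)))      # well-compensated phase
tilted = np.angle(np.exp(1j * (0.5 * x + 0.3 * y)))  # strong residual tilt

print("smooth map discontinuities:", count_phase_discontinuities(smooth))
print("tilted map discontinuities:", count_phase_discontinuities(tilted))
```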
Imaging modalities are important for many purposes, and integrating deep learning with imaging systems can help enhance their performance. cGAN and cycleGAN models can help provide high-resolution images in confocal scanning microscopy, and cGANs can potentially compensate and reconstruct DHM holograms without spatial filtering and compensation processes. cGAN-QPI models can reconstruct cells that other methods handle incorrectly and can be applied to different DHM systems.
(1) Doblas, A.; Trujillo, C. Overview of Learning-Based Models to Enhance the Imaging Infrastructure. Presented at SPIE Photonics West, San Francisco, CA, USA, January 30–31, 2024.
(2) What Is Deep Learning? Amazon Web Services, 2024. https://aws.amazon.com/what-is/deep-learning/ (accessed 2024-02-05).