Scientists from Pusan National University in the Republic of Korea recently tested a new light detection and ranging (LiDAR)-based system for improving autonomous recognition systems. Their findings were later published in Scientific Reports (1).
Autonomous recognition systems have seen rapid growth in recent years, finding applications in fields such as robotics and autonomous driving. These systems usually employ cameras or LiDAR to acquire information about the surrounding environment during driving, providing crucial feedback for navigation and safety strategies such as routing and collision avoidance. When combined with time-of-flight (ToF)-based light detection, LiDAR systems can measure object distances over hundreds of meters with centimeter accuracy, a capability that can be further enhanced when artificial intelligence (AI) is applied alongside these systems. However, object recognition and classification based on shape information alone can still suffer from inaccuracy and misidentification, such as failing to detect ice on the road or to distinguish real people from human-shaped objects.
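The ToF principle is straightforward: the system measures the round-trip time of a laser pulse and converts it to distance via d = c·t/2. A minimal sketch (the function name and example timing are illustrative, not values from the study):

```python
# Illustrative sketch of time-of-flight (ToF) ranging: a LiDAR measures
# the round-trip time of a laser pulse; distance is d = c * t / 2.

C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    """Convert a measured round-trip pulse time to object distance in meters."""
    return C * round_trip_time_s / 2.0

# A ~1 microsecond round trip corresponds to roughly 150 m
print(round(tof_distance_m(1e-6), 1))  # → 149.9
```

The division by two accounts for the pulse traveling to the object and back; centimeter accuracy therefore requires timing resolution on the order of tens of picoseconds.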
Multi-spectral LiDAR systems have recently been introduced to overcome these limitations by providing additional material information through spectroscopic imaging. However, previous iterations of these systems have typically relied on spectrally resolved detection methods using bandpass filters or complex dispersive optical systems, which carry inherent limitations. For this experiment, the scientists proposed a time-division-multiplexing (TDM)-based multi-spectral LiDAR system for semantic object inference through the simultaneous acquisition of spatial and spectral information. The TDM method, which implements spectroscopy by sampling pulses of different wavelengths in the time domain, can eliminate the optical loss of dispersive spectroscopy while providing a simple, compact, and cost-effective system.
According to the scientists, “By minimizing the time delay between the pulses of different wavelengths within a TDM burst, all pulses arrive at the same location during a scan, thereby collecting spectral information from the same spot on the object, which simplifies data processing for the object classification and allows maintaining a sufficient scan rate of the LiDAR system” (1). The TDM-based multi-spectral LiDAR system used in this experiment employs nanosecond pulsed lasers at five wavelengths (980 nm, 1060 nm, 1310 nm, 1550 nm, and 1650 nm) in the short-wave infrared (SWIR) range, covering a 670 nm bandwidth to capture sufficient material-dependent differences in reflectance.
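To picture how time-division multiplexing lets a single detector separate the five wavelengths, consider a hedged sketch in which each wavelength occupies a known time slot within a burst; the slot width used here is an assumed value for illustration, not a parameter reported in the paper:

```python
# Sketch of TDM demultiplexing: pulses at the five SWIR wavelengths are
# fired with small, fixed delays inside one burst, so a single detector
# can attribute each return to a wavelength by its arrival time.
# The 100 ns slot width is an illustrative assumption.

WAVELENGTHS_NM = [980, 1060, 1310, 1550, 1650]
SLOT_NS = 100.0  # assumed per-wavelength time slot within a burst

def wavelength_for_return(offset_in_burst_ns: float) -> int:
    """Map a return's time offset within the burst (after subtracting the
    common time-of-flight delay) to the wavelength that produced it."""
    slot = int(offset_in_burst_ns // SLOT_NS)
    return WAVELENGTHS_NM[slot]

print(wavelength_for_return(30.0))   # → 980 (first slot)
print(wavelength_for_return(330.0))  # → 1550 (fourth slot)
```

Because the intra-burst delays are small relative to the scan speed, all five pulses sample essentially the same spot on the object, which is what allows the spectral channels to be registered without extra alignment processing.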
To demonstrate the system’s recognition performance, the scientists mapped multi-spectral images of a human hand, a mannequin hand, a fabric-gloved hand, a nitrile-gloved hand, and a printed human hand onto a red, green, and blue (RGB)-color-encoded image. This image clearly visualizes spectral differences as RGB color depending on the material, even though the objects share a similar shape. The multi-spectral images’ classification performance was then demonstrated using a convolutional neural network (CNN) model trained on the full multi-spectral data set.
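The RGB encoding described above can be pictured as assigning reflectance in three of the five spectral bands to the three color channels, so materials with identical shapes but different spectra render as visibly different colors. The band-to-channel mapping and reflectance values below are assumptions for illustration, not the paper's actual choices:

```python
# Illustrative sketch: encode per-pixel SWIR reflectance as RGB color.
# Which bands feed R, G, and B is assumed here, not taken from the study.

def spectral_to_rgb(reflectance: dict) -> tuple:
    """Map per-pixel reflectance (0..1) at three wavelengths to 8-bit RGB."""
    r = round(255 * reflectance[1650])
    g = round(255 * reflectance[1310])
    b = round(255 * reflectance[980])
    return (r, g, b)

# Hypothetical reflectance values for two objects with the same shape
human_skin = spectral_to_rgb({980: 0.55, 1310: 0.30, 1650: 0.10})
nitrile_glove = spectral_to_rgb({980: 0.40, 1310: 0.45, 1650: 0.50})
print(human_skin, nitrile_glove)  # distinct colors despite identical shape
```

In practice the CNN classifier consumes all five spectral channels per pixel rather than a three-channel projection; the RGB image mainly serves as a human-readable visualization of the material contrast.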
Following the experiment, the scientists claimed their system minimized optical loss while enabling simultaneous ranging of the target objects. The multi-spectral images showed a clear distinction between different materials when mapped into an RGB-color-encoded image; the researchers also believe these images are well suited to systematic classification with high accuracy via a CNN architecture that makes full use of the spectral information. Though more research remains to be done, the scientists believe this technology offers potential for the development of compact multi-spectral LiDAR systems that enhance the safety and reliability of autonomous driving.
(1) Kim, S.; Jeong, T-I.; Kim, S.; Choi, E.; et al. Time Division Multiplexing Based Multi-Spectral Semantic Camera for LiDAR Applications. Scientific Reports 2024, 14, 11445. DOI: 10.1038/s41598-024-62342-2