Spectroscopy
With analytical advances, the clinical practice threshold for lead has dropped from 60 to 5 µg/dL.
The development of analytical instrumentation over the past 50 years has allowed us not only to detect trace metals at parts-per-quadrillion (ppq) levels, but also to determine their oxidation state, biomolecular form, elemental species, and isotopic constituents. Here, we look at how the development of atomic spectroscopy techniques has enabled a much better understanding of the links between trace metal toxicity and human disease, and, in particular, the role of lead in the health of young children.
Understanding the effects of trace metals on human health is as complex as it is fascinating. Too low or too high a concentration of essential trace elements in our diet can affect our quality of life. At the same time, metallic contamination of the air, soil, and water can have a dramatic impact on our well-being. There are many examples that highlight both the negative and positive effects of trace metals on our lives. For instance, the effect of lead toxicity, particularly on young children, is well documented, but is it possible to pinpoint the source of the lead poisoning? The movie "Erin Brockovich" alarmed moviegoers about the dangers of hexavalent chromium (Cr VI) in drinking water, but how many in the audience realized that trivalent chromium (Cr III) is necessary for the metabolism of carbohydrates and fats? Dr. Oz recently alarmed his viewers about high levels of arsenic in apple juice, but what he failed to say was that it was not the highly toxic inorganic form of arsenic that had been found, but organic arsenic that had been metabolized by the apple tree to a less toxic form. Selenium, which is found in many vegetables including garlic and onions, has important antioxidant properties, but do we know why some selenium compounds are essential, while others are toxic? Clearly these are all complex questions that have to be answered to fully understand the role of trace elements in the mechanisms of human diseases. Atomic spectroscopy has an important role to play in answering these questions.
The development of analytical instrumentation over the past 50 years has allowed us not only to detect trace metals at parts-per-quadrillion (ppq) levels, but also to determine their oxidation state, biomolecular form, elemental species, and isotopic constituents. We take for granted all the powerful and automated analytical tools we have at our disposal to carry out trace elemental studies on clinical, toxicological, and environmental samples. However, it wasn't always that way. As recently as the 1960s, the majority of trace elemental determinations were carried out by traditional wet-chemical methods such as volumetric, gravimetric, or colorimetric assays. In fact, the pharmaceutical industry has been using a sulfide precipitation colorimetric test for the measurement of lead and other heavy metals for more than 100 years; that method was only replaced in the United States Pharmacopeia (USP) in January 2018 by a plasma spectrochemical test (1).
It wasn't until the development of atomic spectroscopic techniques in the early to mid-1960s that the clinical analytical community realized they had a highly sensitive and diverse trace element technique that could be automated. Every time a major development was made in atomic spectroscopy, beginning with flame atomic emission (FAE) and flame atomic absorption (FAA) in the early 1960s, electrothermal atomization (ETA) or graphite furnace atomic absorption (GFAA) in the early 1970s, inductively coupled plasma–optical emission spectrometry (ICP-OES) in the late 1970s, and inductively coupled plasma–mass spectrometry (ICP-MS) in the early 1980s, trace element detection capability, sample throughput, and automation dramatically improved. There is no question that developments and breakthroughs in atomic spectroscopy have directly impacted our understanding of the way trace metals interact with the human body. Let us now take a look at a specific example where atomic spectroscopy techniques have allowed us to delve deeper into understanding the impact of trace metal toxicity on our lives, focusing specifically on lead (Pb).
Lead has no known biological or physiological purpose in the human body, but is readily absorbed into the body by ingestion, inhalation, and, to a lesser extent, through the skin (2). Inorganic lead in submicrometer-sized particles, in particular, can be almost completely absorbed through the respiratory tract, and larger particles may be swallowed. The extent and rate of absorption of lead through the gastrointestinal tract depend on characteristics of the individual and on the nature of the medium ingested. It has been shown that children can absorb 40–50% of an oral dose of water-soluble lead, compared to only 3–10% for adults (3). Young children and toddlers are particularly susceptible because of their playing and eating habits, and because they typically have more hand-to-mouth activity than adults (4). Lead is absorbed more easily if there is a calcium or iron deficiency, or if the child has a high-fat, mineral-deficient, or low-protein diet. Once absorbed, lead is distributed in the body in three main areas: bones, blood, and soft tissue. About 90% is distributed in the bones, while the majority of the rest is absorbed into the bloodstream, where it is taken up by porphyrin molecules (complex nitrogen-containing organic compounds that provide the foundation structure for hemoglobin) in the red blood cells (5). It is, therefore, clear that the repercussions and health risks are potentially enormous if children are exposed to abnormally high levels of lead.
The toxic effects of lead have recently been exemplified by the drinking water crisis in Flint, Michigan, where public health officials and water authority personnel failed to take remedial action when they replaced Lake Huron with the Flint River as the source of the city's drinking water, a change that resulted in corrosion of lead pipes and high levels of lead in the drinking water supply. This particular problem is still being investigated, but the Centers for Disease Control and Prevention (CDC) recently reported that at least four million households have children living in them who are being exposed to high levels of lead from a combination of old lead paint and lead water pipes. As a result, there are approximately half a million U.S. children 1–5 years of age with blood lead levels (BLL) in excess of 5 micrograms per deciliter (µg/dL), the level at which the CDC recommends remedial actions be taken (6).
Lead poisoning affects virtually every system in the body, and often occurs with no distinctive symptoms. It can damage the central nervous system, kidneys, and reproductive system and, at higher levels, can cause coma, convulsions, and even death. Even low levels of lead are harmful, and are associated with lower intelligence, reduced brain development, decreased growth, and impaired hearing (7). The level of lead in a person's system is confirmed by a blood lead test, and by today's standards a blood lead level is considered elevated if it is in excess of 5 µg/dL (50 ppb) for children (8). However, the long-term effects of lead poisoning have not always been well understood. In the early and mid-1960s, remedial action would be taken if a blood lead level (or clinical practice threshold level, as it was known then) was in excess of 60 µg/dL. As investigators developed more sensitive detection systems and designed better studies, the generally recognized level for lead toxicity progressively shifted downward. In 1970, it was lowered to 40 µg/dL and, by 1978, the level had been reduced to 30 µg/dL. In 1985, the CDC published a threshold level of 25 µg/dL, which it eventually lowered to 10 µg/dL in 1991. It stayed at this level until it was reduced to 5 µg/dL in 2012. However, as our understanding of disease improves and measurement technology becomes more refined, this level could be pushed even lower in the future (9). Figure 1 shows the trend in blood lead levels considered elevated by the CDC since the mid-1960s.
Figure 1: The trend in blood lead levels (µg/dL) in children considered elevated by the Centers for Disease Control and Prevention (CDC), since the mid-1960s.
Note that the term blood lead reference value (BLRV) has been used more recently (since 2012), and refers specifically to the 97.5th percentile of blood lead levels for children 1–5 years old in the United States, calculated from blood lead tests performed in the National Health and Nutrition Examination Survey (NHANES). The BLRV is not a health-based toxicity threshold, nor does it define what level is considered normal. It is intended to help identify the highest risk childhood populations and geographic areas.
It is also important to point out that these thresholds were not all determined the same way. Only in 2012 (when the recognized level for lead toxicity was lowered to 5 µg/dL) was the population-based threshold called the BLRV and calculated from population statistics. Although all these levels could be said to describe thresholds of elevated blood lead levels generally, even the term elevated blood lead level wasn't specifically defined in CDC policy until 1978.
Currently, the major source of lead poisoning among children is lead-based household paint, which was used until it was banned in the United States in 1978 by the Consumer Product Safety Commission. Before that ban, leaded gasoline was the largest source of lead pollution; it was not completely removed from the pumps until 1995. Other potential sources include lead pipes used in drinking water systems, airborne lead from smelters, clay pots and pottery glazes, lead batteries, and household dust. However, awareness of the problem, combined with preventative care and regular monitoring, has reduced the percentage of children aged 1–5 years with elevated blood lead levels (≥5 µg/dL) in the United States from 26% in the early-to-mid 1990s to less than 2% in 2014. These data were taken from a recent NHANES report (10).
There is no question that the routine monitoring of children has had a huge impact in reducing the number of children with elevated blood lead levels. Lead assays were initially carried out using the dithizone colorimetric method, which was sensitive enough, but very slow and labor-intensive. The method became somewhat more automated when anodic stripping voltammetry was developed (11), but blood lead analysis was not considered a truly routine method until atomic spectroscopy techniques became available. Let's take a more detailed look at how improvements in the detection capability of atomic spectroscopy instrumentation have helped to lower the number of children with elevated blood lead levels, since atomic absorption was first commercialized in the early 1960s.
When flame atomic absorption (FAA) was first developed, the elevated blood lead level was set at 60 µg/dL. Even though this level is equivalent to 600 parts per billion (ppb) of lead, well above the FAA detection limit of ~20 ppb at the time, FAA struggled to accurately detect lead at these levels once sample preparation was taken into consideration. The preparation of blood samples typically involved either dilution with a weak acid followed by centrifuging or filtering, or acid digestion followed by dilution and either centrifuging or filtering. More recently, dilution with a strong base like tetramethylammonium hydroxide (TMAH), with a surfactant added to allow for easier aspiration, has been used. When the sample preparation dilution was factored into the equation, a blood lead level of 600 ppb was reduced to an effective concentration of 10–20 ppb at the instrument, virtually the same as the FAA instrumental detection limit.
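To put those numbers in perspective, the short Python sketch below works through the unit conversion and dilution arithmetic. The 1:30 and 1:60 dilution factors are illustrative assumptions chosen only to reproduce the 10–20 ppb figure quoted above, not values from a specific published method.

```python
# Illustrative arithmetic only: convert a blood lead level in ug/dL to ppb
# and estimate the concentration presented to the instrument after dilution.

def ug_per_dl_to_ppb(blood_lead_ug_dl: float) -> float:
    """1 ug/dL = 10 ug/L = 10 ppb (assuming a sample density of ~1 kg/L)."""
    return blood_lead_ug_dl * 10.0

def diluted_concentration_ppb(conc_ppb: float, dilution_factor: float) -> float:
    """Concentration reaching the nebulizer after an n-fold dilution."""
    return conc_ppb / dilution_factor

if __name__ == "__main__":
    threshold_ug_dl = 60.0                         # 1960s clinical practice threshold
    conc_ppb = ug_per_dl_to_ppb(threshold_ug_dl)   # 600 ppb in whole blood
    for dilution in (30, 60):                      # assumed 1:30 to 1:60 prep dilutions
        print(f"{threshold_ug_dl} ug/dL diluted 1:{dilution} -> "
              f"{diluted_concentration_ppb(conc_ppb, dilution):.0f} ppb at the instrument")
```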
To get around this limitation, an accessory called the Delves Cup was developed in the late 1960s to improve the detection limit of FAA (12). The Delves Cup approach used a metal crucible or boat, usually made from nickel or tantalum, positioned over the flame. The sample, typically 0.1–1.0 mL, was pipetted into the cup, and the heated sample vapor passed into a quartz tube that was also heated by the flame. The ground state atoms generated from the heated vapor were concentrated in the tube, and therefore resident in the optical path for a longer period of time, resulting in much higher sensitivity and approximately 100x lower detection limits. The Delves Cup became the standard method for carrying out blood lead determinations for many years, because of its relative simplicity and low cost of operation.
Unfortunately, the Delves Cup approach was found to be very operator dependent, not very reproducible (because of manual pipetting), and required calibration with blood matrix standards (13). The technique became less attractive when electrothermal atomization (ETA) was commercialized in the early 1970s. This new approach offered a detection capability for lead of ~0.1 ppb, approximately 200x better than FAA. However, its major benefit for the analysis of blood samples was the ability to dilute and inject the sample automatically into the graphite tube with very little off-line sample preparation. In addition, because the majority of the matrix components were driven off prior to atomization at ~3000 °C, interferences were generally lower than with the Delves Cup, which only reached the temperature of the air–acetylene flame, ~2000 °C. This breakthrough meant that blood lead determinations, even at extremely low levels, could now be carried out in an automated fashion with relative ease.
The next major milestone in AA was the development of Zeeman background correction (ZBGC) in 1981, which compensated for nonspecific absorption and structured background produced by complex biological matrices, like blood and urine. ZBGC, in conjunction with the stabilized temperature platform furnace (STPF) concept, allowed for virtually interference-free graphite furnace analysis of blood samples using aqueous calibrations (14). The ZBGC–STPF approach was so successful, primarily because many different kinds of samples could be analyzed against simple aqueous standards, that it became the recognized way of analyzing most types of complex matrices by ETA.
Even though ETA had been the accepted way of doing blood lead determinations for more than 15 years, the commercialization of quadrupole-based ICP-MS in 1983 gave analysts a tool that was not only 100x more sensitive, but also suffered from less severe matrix-induced interferences than ETA. In addition, ICP-MS offered multielement capability and much higher sample throughput. These features made ICP-MS very attractive to the clinical community, and many labs converted to ICP-MS as their main technique for trace element analysis. Then, as the technique matured, with advanced mass separation devices, performance-enhancing tools, powerful interference reduction techniques, and more-flexible sampling accessories, detection limits in real-world samples improved dramatically for some elements. Figure 2 shows the improvement in detection capability (in ppb) of ICP-MS compared to ETA and the other atomic spectroscopy techniques.
Figure 2: Comparison of detection capability (ppb) of atomic spectroscopy techniques used to monitor blood lead and the approximate year they were developed or improved.
It should also be emphasized that the detection limits shown in Figure 2 are instrument detection limits (IDLs), which are based on simplistic calculations of aqueous blanks carried out by manufacturers, and not realistic method detection limit (MDL) or procedural limits of detection (PLOD) that take into consideration the sample preparation procedure, dilution steps, and multiple analytical measurements. IDLs are also only intended to be used as a guideline for comparison purposes because there are so many different ways of assessing detection capability, based on variations in manufacturer, instrument design, and methodology.
Figure 3 is a combination of Figures 1 and 2, and shows the improvement in the blood lead method detection limit (now expressed in µg/dL rather than ppb) offered by atomic spectroscopy techniques, compared to the trend in blood lead levels set by the CDC. To make the comparison more valid, a factor of 100x has been applied to the instrumental detection limits to give an approximation of the achievable "real-world" method detection limit in a blood sample matrix. Both plots are shown on a log scale, so they can be viewed on the same graph. The main purpose of these data is to show how the blood lead levels considered by the CDC as "elevated" over the past 50 years have dropped as the method detection limits of the various atomic spectroscopy techniques have been lowered, giving researchers more confidence in the integrity of their data.
Figure 3: The improvement in real-world method detection capability (in µg/dL) offered by atomic spectroscopy techniques for blood-lead determinations compared to the trend in blood-lead levels regulated by the Centers for Disease Control and Prevention (CDC).
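The arithmetic behind this comparison is simple enough to sketch in a few lines of Python. The instrument detection limits below are the approximate values quoted earlier in this article, and the 100x degradation factor and 10 ppb = 1 µg/dL conversion are the same assumptions used to construct Figure 3; treat the output as an approximation, not as published method data.

```python
# Approximate the "real-world" blood MDLs plotted in Figure 3 from the
# instrument detection limits (IDLs) quoted in the text, using the assumed
# 100x IDL-to-MDL degradation factor and the conversion 10 ppb = 1 ug/dL.

INSTRUMENT_IDL_PPB = {
    "FAA (early 1960s)": 20.0,        # ~20 ppb, quoted above
    "Delves Cup (late 1960s)": 0.2,   # ~100x better than FAA
    "ETA/GFAA (early 1970s)": 0.1,    # ~0.1 ppb, quoted above
    "ICP-MS (1983 onward)": 0.001,    # ~100x better than ETA
}
IDL_TO_MDL_FACTOR = 100.0             # assumed degradation in a blood matrix

for technique, idl_ppb in INSTRUMENT_IDL_PPB.items():
    mdl_ug_dl = idl_ppb * IDL_TO_MDL_FACTOR / 10.0   # ppb -> ug/dL
    print(f"{technique:<26s} approximate blood MDL: {mdl_ug_dl:g} ug/dL")
```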
It should also be emphasized that a degradation factor of 50–100x is quite normal when converting an IDL to an MDL for samples characterized by atomic spectroscopy techniques. However, when analyzing a very complex biological matrix like blood by ICP-MS, there are many different ways of calculating LODs to encompass the entire analytical procedure. One common approach to determining the PLOD is to carry out 20 runs, plot the standard deviation of the standards and spiked matrix samples against concentration, and extrapolate the regression line to the ordinate axis to estimate the standard deviation at zero concentration (15). In a high-throughput laboratory, this approach might not be realistic because of the additional time required. The time involved can be shortened somewhat by taking fewer readings, but doing so will clearly degrade the statistical quality of the data and the detection limit. Whichever approach is used, one should take into account variability in sample preparation, environmental contamination, solvents, and reagents, as well as minor sampling errors from dilution or pipetting over many runs, all of which can cause variability from day to day. Given such variability, a real-world procedural LOD for Pb in blood is often three orders of magnitude worse than the instrument detection limit, and is typically around 0.01–0.07 µg/dL, depending on the type of ICP-MS technology and interference reduction technique used (9,16).
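As a rough illustration of that extrapolation, the sketch below regresses the replicate standard deviation against concentration for a set of hypothetical spike levels (the numbers are invented for illustration and are not taken from references 15 or 16) and reads off the intercept as the standard deviation at zero concentration, with the LOD then taken as three times that value.

```python
# Minimal sketch of the procedural LOD estimate described above (hypothetical data):
# regress the replicate standard deviation against concentration and extrapolate
# the fit to zero concentration to estimate sigma_0; take LOD = 3 * sigma_0.
import numpy as np

# Hypothetical Pb spike levels (ug/dL) and the standard deviation of replicate
# results at each level, accumulated over ~20 runs
concentrations = np.array([0.05, 0.1, 0.5, 1.0, 2.0])
replicate_sd = np.array([0.006, 0.008, 0.015, 0.025, 0.045])

# Linear fit of SD vs. concentration; the intercept approximates the SD at zero
slope, intercept = np.polyfit(concentrations, replicate_sd, 1)
sigma_zero = intercept

procedural_lod = 3.0 * sigma_zero  # common 3-sigma convention
print(f"Estimated SD at zero concentration: {sigma_zero:.4f} ug/dL")
print(f"Estimated procedural LOD (3*sigma): {procedural_lod:.3f} ug/dL")
```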
An added benefit of the ICP-MS technique is that it also offers isotopic measurement capability. This feature is very attractive to many clinical laboratories, because it gives them the ability to carry out isotope tracer (17), isotope dilution (18), and isotope ratio (19) measurements, which are beyond the reach of the other atomic spectroscopy techniques. In fact, this isotopic measurement capability allows researchers to get a better understanding of the source of lead poisoning by measuring the isotope ratios of blood lead samples and comparing them with possible sources of lead contamination. The principle behind this approach, known as isotopic fingerprinting, is based on the fact that lead is composed of four naturally occurring isotopes: 204Pb, 206Pb, 207Pb, and 208Pb, all with the same atomic number, but with different atomic masses. Thus, when naturally occurring lead is ionized in the plasma, it generates four ions, all with different atomic masses. Figure 4 shows a mass spectrum of the four lead isotopes 204Pb, 206Pb, 207Pb, and 208Pb, together with their relative natural abundances of 1.4%, 24.1%, 22.1%, and 52.4%, respectively.
Figure 4: Mass spectrum of the four lead isotopes at 204, 206, 207, and 208 atomic mass units (amu), with their respective natural abundances.
All the lead isotopes, with the exception of 204Pb, are products of the radioactive decay of either uranium or thorium, and their abundances vary slightly depending on the rock type and geological area. This means that in all lead-based materials and systems, 204Pb has remained essentially unchanged at 1.4% since the earth was first formed (20). The ratios of the isotopic concentrations of 208Pb, 207Pb, and 206Pb to that of 204Pb will therefore vary, depending on the source of lead. This fundamental principle can then be used to match lead isotope ratios in someone's blood to a particular environmental source of lead contamination.
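In practice, the fingerprint is simply the set of three ratios referenced to 204Pb. A minimal sketch of that calculation is shown below; the ion counts are hypothetical values chosen to be roughly proportional to the natural abundances, not measurements from any real sample.

```python
# Sketch of the isotope-ratio calculation used for lead fingerprinting.
# The ion counts below are hypothetical, chosen to be roughly proportional to
# the natural abundances (204Pb: 1.4%, 206Pb: 24.1%, 207Pb: 22.1%, 208Pb: 52.4%).

def lead_isotope_ratios(counts: dict) -> dict:
    """Return the 206/204, 207/204, and 208/204 ratios from blank-corrected ion counts."""
    reference = counts[204]
    return {f"{mass}Pb/204Pb": counts[mass] / reference for mass in (206, 207, 208)}

# Hypothetical blank-corrected counts for a digested blood sample
blood_counts = {204: 14_000, 206: 241_000, 207: 221_000, 208: 524_000}

for label, ratio in lead_isotope_ratios(blood_counts).items():
    print(f"{label} ratio: {ratio:.3f}")
```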
However, there are known, well-understood limitations of this approach. For lead fingerprinting to be useful, the potential sources of lead exposure must be limited in number and scope, and the lead sources must be isotopically distinct. If more than two sources of environmental lead are likely, such as water pipes, gasoline, smelter emissions, paint, pottery, and glazes, then mixed or combined isotope ratios will result and no useful data will be obtained. In addition, chronic exposure to extremely high lead levels can leave a person with brittle or broken bones; lead accumulated in the bones is then released into the bloodstream, shifting the lead equilibrium and elevating blood lead levels independent of the source of the current lead exposure or contamination.
A good example of using isotope ratios to pinpoint the source of lead poisoning involved a study carried out on a group of people living in a small village near Mexico City (21). A number of the residents had abnormally high levels of lead in their blood, which came from one or both of two likely sources: leaded gasoline, which had contaminated the soil, and glazed ceramic pots, which were used for cooking and eating. For this study, the lead isotope ratios were measured using an electrothermal vaporization (ETV) sampling accessory coupled to an ICP-MS instrument. In this sampling device, a heated graphite tube, similar to the type used in ETA, is used to thermally pretreat the sample. But instead of using the tube to produce ground state atoms, its main function is to drive off the bulk of the matrix before the analytes are vaporized into the plasma for ionization and measurement by the mass spectrometer. The major benefit of ETV-ICP-MS for this application is that complex matrices like blood, gasoline, and pottery or clay material can be analyzed with very little interference from matrix components (22). An additional benefit with regard to taking blood samples is that typically only a 20–50 µL aliquot is required for analysis. Figure 5 is a schematic of how the ETV-ICP-MS system works, showing the two distinct steps: prevaporization to drive off the matrix components, and vaporization to sweep the analyte vapor into the ICP-MS instrument for analysis.
Figure 5: Schematic of ICP-MS coupled with an electrothermal vaporization sampling accessory (ETV-ICP-MS), showing the two distinct stages: (a) prevaporization to drive off the matrix components and (b) vaporization to sweep the analyte vapor into the ICP-MS instrument for analysis. Adapted with permission from reference 22.
In the Mexican study, ETV-ICP-MS was used to determine the ratios of 208Pb, 207Pb, and 206Pb to 204Pb in blood samples from a group of residents. These ratios were then compared with the two likely sources of lead contamination, the cooking pots and the gasoline samples. Figure 6 shows a subset of data taken from the study: a plot of the 206Pb:204Pb ratio against the 207Pb:204Pb ratio for the blood, cookware, and gasoline samples. It can be seen from this plot that the data for the blood and cookware are grouped very tightly together around the theoretical value of the ratios (known as the primeval lead value), while the gasoline data are grouped together on their own.
Figure 6: A plot of the ratio of 206Pb:204Pb against the ratio of 207Pb:204Pb for blood, cookware, and gasoline samples, showing the theoretical (primeval) lead line. Adapted with permission from reference 21.
Based on principal component analysis of the data, this result confirms that the lead isotope ratios of the blood and cooking pots are almost identical, and are very close in composition to primeval lead, with very little addition of radiogenic lead (produced from radioactive decay). On the other hand, the alkyl lead compounds used in the production of leaded gasoline come from a different source of lead, and as a result generate a very different isotopic signature. These data provided convincing evidence that the residents of this small Mexican village were being poisoned by the glazed clay pots they were using for cooking and eating, and not by contamination of the environment by leaded gasoline, as was first suspected.
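The study itself relied on principal component analysis, but the underlying logic of the attribution can be illustrated with a much simpler sketch: assign each blood sample to whichever candidate source lies closest in (206Pb/204Pb, 207Pb/204Pb) ratio space. All the numbers below are hypothetical and are not data from reference 21.

```python
# Simplified illustration of source attribution in isotope-ratio space.
# The Mexican study used principal component analysis; here, as a sketch, each
# blood sample is assigned to the candidate source whose mean
# (206Pb/204Pb, 207Pb/204Pb) ratios lie closest in Euclidean distance.
# All ratio values below are hypothetical.
import math

candidate_sources = {
    "cookware": (18.70, 15.60),   # hypothetical mean (206/204, 207/204) ratios
    "gasoline": (17.90, 15.30),
}

def nearest_source(sample_ratios: tuple) -> str:
    """Return the candidate source closest to the sample in ratio space."""
    return min(candidate_sources,
               key=lambda src: math.dist(candidate_sources[src], sample_ratios))

blood_samples = [(18.68, 15.61), (18.72, 15.58), (18.65, 15.62)]  # hypothetical
for ratios in blood_samples:
    print(f"Blood sample {ratios} -> most similar to {nearest_source(ratios)}")
```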
There is no question that developments in atomic spectroscopy have helped us better understand the toxic effects of lead over the past 50 years. Atomic spectroscopy advances have allowed us to lower the clinical practice threshold from 60 µg/dL in the mid-1960s to the current blood lead reference value (BLRV) of 5 µg/dL. More importantly, these techniques have helped to reduce the percentage of children in the United States with elevated blood lead levels from 26% in the early to mid-1990s to less than 2% in 2014, as well as allowing us to gain a much better understanding of the environmental sources of lead contamination. However, such is the power and versatility of modern atomic spectroscopy instrumentation and its accessories that it has also dramatically improved our understanding of other trace metal-related human diseases. The toxic effects of trivalent and pentavalent arsenic and hexavalent chromium, or the nutritional benefits of different selenium species, would still be relatively unknown if it weren't for the continual improvements in ICP-MS and, in particular, its use as a very sensitive detector for trace element speciation studies using chromatographic separation technology. Even though ICP-MS has been successfully applied to many application areas since it was first commercialized in 1983, its use as a biomedical, clinical, and toxicological research tool has had a direct impact on the quality of many people's lives.
I would like to thank Dr. Steve Pappas of the CDC for his thoughtful comments and edits to this month's Atomic Perspectives column.
(1) Elemental Impurities in Pharmaceuticals: Updates, United States Pharmacopeia (USP) website, http://www.usp.org/chemical-medicines/elemental-impurities-updates.
(2) J. Savory and M.R. Willis, Clin. Chem. 40, 1387 (1994).
(3) Agency for Toxic Substances and Disease Registry (ATSDR), Toxicological Profile for Lead, Section 3.3: Toxicokinetics, August, 2007, (https://www.atsdr.cdc.gov/toxprofiles/TP.asp?id=96&tid=22).
(4) Preventing Lead Poisoning in Young Children, Chapter 2: Absorption of Lead, Centers for Disease Control and Prevention (CDC), 1991, https://www.cdc.gov/nceh/lead/publications/books/plpyc/contents.htm.
(5) H. L. Needham, Case Studies in Environmental Medicine-Lead Toxicity, U. S. Dept. of Health and Human Services (1990).
(6) Preventing Lead Poisoning in Young Children, Lead Information Page, Centers for Disease Control and Prevention (CDC), https://www.cdc.gov/nceh/lead/default.htm.
(7) Childhood Blood Lead Levels in Children Aged <5 Years: United States, 2009–2014, Morbidity and Mortality Weekly Report (MMWR), Surveillance Summaries, January 20, 2017, 66(3), 1–10, https://www.cdc.gov/mmwr/volumes/66/ss/ss6603a1.htm.
(8) CDC Response to Advisory Committee on Childhood Lead Poisoning Prevention Recommendations in "Low Level Lead Exposure Harms Children: A Renewed Call of Primary Prevention" (2012), https://www.cdc.gov/nceh/lead/ACCLPP/blood_lead_levels.htm.
(9) Record of Proceedings from the Meeting of the Lead Poisoning Prevention Subcommittee of the NCEH/ATSDR Board of Scientific Counselors, Centers for Disease Control and Prevention (CDC), Atlanta, GA, September 19, 2016.
(10) Centers for Disease Control and Prevention (CDC), Morbidity and Mortality Weekly Report (MMWR), October 7, 2016, 65(39), 1089. Source: The National Health and Nutrition Examination Survey (NHANES), http://www.cdc.gov/nchs/nhanes/index.htm.
(11) S. Constantini, R. Giordano, and M. Rubbing, J. Microchemistry 35, 70 (1987).
(12) H.T. Delves, Analyst 95, 431 (1970).
(13) S. Cabet, J.M. Ottoway, and G.S. Fell, Proc. Analyt. Div. Chem. Soc. 300 (1977).
(14) W. Slavin, Sci. Total Environ. 71, 17 (1988).
(15) J.K. Taylor, Quality Assurance of Chemical Measurements (CRC Press, Boca Raton, Florida, 1st ed., 1987).
(16) D.R. Jones, J.M. Jarrett, D.S. Tevis, M. Franklin, N.J. Mullinix, K.L. Wallon, C.D. Quarles Jr., K.L. Caldwell, and R.L. Jones, Talanta 162, 114–122 (2017), https://www.sciencedirect.com/science/article/pii/S0039914016307305.
(17) B.T.G. Ting and M. Janghorbani, Anal. Chem. 58, 1334 (1986).
(18) J.W. McLaren, D. Beauchemin, and S.S. Berman, Anal. Chem. 59, 610 (1987).
(19) W.I. Manton, J. Toxicology 36, 7, 705 (1998).
(20) R.D. Russell and R.M. Farquhar, Lead Isotopes in Geology (Inter-Science Publishers Inc, New York, New York, 1960).
(21) M. Chaudhary-Webb, D.C. Paschal, W.C. Elliott, H.P. Hopkins, A.M. Ghazi, B.C. Ting, and I. Romieu, Atom. Spectrosc. 19, 5, 156 (1998).
(22) S. Beres, R. Thomas, E. Denoyer, and P. Bruckner, Spectroscopy 9(1), 20 (1994).
Robert Thomas
Robert Thomas is the principal of Scientific Solutions, a consulting company that serves the application and writing needs of the trace element user community. He has worked in the field of atomic and mass spectroscopy for more than 40 years, including 25 years for a manufacturer of atomic spectroscopic instrumentation. He has written almost 90 technical publications, including a 15-part tutorial series on ICP-MS. He recently completed his fourth textbook, entitled Measuring Elemental Impurities in Pharmaceuticals: A Practical Guide. He has an advanced degree in analytical chemistry from the University of Wales, UK, and is also a Fellow of the Royal Society of Chemistry (FRSC) and a Chartered Chemist (CChem). He has served on the ACS Committee on Analytical Reagents for the past 18 years as leader of the heavy metals task force.