Because of its unique national security challenges, Israel has a long history of developing highly effective, state-of-the-art defense technologies and capabilities. A prime example of Israeli military strength is the Iron Dome air defense system[1], which has been widely touted as the world’s best defense against missiles and rockets[2].

However, on Oct. 7, 2023, Israel was caught off guard by a very large-scale missile attack by the Gaza-based Palestinian militant group Hamas. The group fired several thousand missiles[3] at a number of targets across Israel, according to reports. While exact details are not available, it is clear that a significant number of the Hamas missiles penetrated the Israeli defenses, inflicting extensive damage and casualties.

I am an aerospace engineer[4] who studies space and defense systems. There is a simple reason the Israeli defense strategy was not fully effective against the Hamas attack. To understand why, you first need to understand the basics of air defense systems.

Air defense: detect, decide, disable

An air defense system consists of three key components. First, there are radars to detect, identify and track incoming missiles. The range of these radars varies. Iron Dome’s radar is effective over distances of 2.5 to 43.5 miles (4 to 70 km)[5], according to its manufacturer, Raytheon. Once an object has been detected by the radar, it must be assessed to determine whether it is a threat. Information such as direction and speed is used to make this determination.

If an object is confirmed as a threat, Iron Dome operators continue to track the object by radar. Missile speeds vary considerably, but assuming a representative speed of 3,280 feet per second (1 km/s), the defense system has at most one minute to respond to an attack.
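That figure is just the radar’s maximum range divided by the missile’s speed. A back-of-the-envelope sketch in Python, using the numbers quoted above (variable names are my own; these are illustrative figures, not operational data):

```python
# Rough response window for a radar-based defense system,
# using the figures quoted in the article (illustrative only).
max_radar_range_km = 70     # upper end of Iron Dome's quoted radar range
missile_speed_km_s = 1.0    # representative missile speed (~3,280 ft/s)

response_window_s = max_radar_range_km / missile_speed_km_s
print(f"Response window: about {response_window_s:.0f} seconds")
```

Seventy seconds is the best case: a missile detected at shorter range leaves proportionally less time to decide and fire.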

a diagram showing the trajectory of a missile along with a radar system tracking the missile and a defensive missile intercepting the attacking missile
The fundamental elements of a missile defense system. Nguyen, Dang-An et al.[6], CC BY-NC[7]

The second major element of an air defense system is the battle control center. This component determines the appropriate way to engage a confirmed threat. It uses the continually updating radar information to determine the optimal response: where to fire interceptor missiles from, and how many to launch against an incoming missile.

The third major component is the interceptor missile itself. For Iron Dome, it is a supersonic missile with heat-seeking sensors. These sensors provide in-flight updates to the interceptor, allowing it to steer toward and close in on the threat. The interceptor uses a proximity fuse activated by a small radar to explode close to the incoming missile so that it does not have to hit it directly to disable it.

Limits of missile defenses

Israel has at least 10 Iron Dome batteries in operation[8], each containing 60 to 80 interceptor missiles. Each of those missiles costs about US$60,000. In previous attacks involving smaller numbers of missiles and rockets, Iron Dome was 90% effective against a range of threats.

So, why was the system less effective against the recent Hamas attacks?

It is a simple question of numbers. Hamas fired several thousand missiles, and Israel had fewer than a thousand interceptors in the field ready to counter them. Even if Iron Dome had been 100% effective against the incoming threats, the sheer number of Hamas missiles meant some were going to get through.

The Hamas attacks illustrate very clearly that even the best air defense systems can be overwhelmed when the number of incoming threats exceeds their capacity to respond.

How Iron Dome works.

The Israeli missile defense has been built up over many years, with high levels of financial investment. How could Hamas afford to overwhelm it? Again, it all comes down to numbers. The missiles fired by Hamas cost about $600 each, roughly one-hundredth the cost of the Iron Dome interceptors. The total cost to Israel of firing all of its interceptors is around $48 million. If Hamas fired 5,000 missiles, the cost would be only $3 million.
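The arithmetic behind that asymmetry is simple enough to spell out. A minimal sketch using the article’s approximate figures (the 800-interceptor total assumes 10 batteries at the upper end of 80 interceptors each):

```python
# Cost asymmetry between defense and attack, using the article's rough figures.
INTERCEPTOR_COST = 60_000     # US$ per Iron Dome interceptor
ATTACK_MISSILE_COST = 600     # US$ per Hamas missile

interceptors_fielded = 10 * 80    # 10 batteries, up to 80 interceptors each
missiles_fired = 5_000

defense_cost = interceptors_fielded * INTERCEPTOR_COST
attack_cost = missiles_fired * ATTACK_MISSILE_COST

print(f"Defense: ${defense_cost:,}")   # $48,000,000
print(f"Attack:  ${attack_cost:,}")    # $3,000,000
print(f"Cost ratio per munition: {INTERCEPTOR_COST // ATTACK_MISSILE_COST}x")  # 100x
```

The attacker spends a few million dollars to force the defender to expend tens of millions, and the defender runs out of interceptors first.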

Thus, in a carefully planned and executed strategy, Hamas accumulated over time a large number of relatively inexpensive missiles that it knew would overwhelm the Iron Dome defensive capabilities. Unfortunately for Israel, the Hamas attack represents a very clear example of military asymmetry: a low-cost, less-capable approach was able to defeat a more expensive, high-technology system.

Future air defense systems

The Hamas attack will have repercussions for all of the world’s major military powers. It clearly illustrates the need for air defense systems that are much more effective in two important ways. First, there is the need for a much deeper arsenal of defensive weapons that can address very large numbers of missile threats. Second, the cost per defensive weapon needs to be reduced significantly.

This episode is likely to accelerate the development and deployment of directed energy air defense systems[9] based on high-energy lasers and high-power microwaves. These devices are sometimes described as having an “infinite magazine[10],” because they have a relatively low cost per shot fired and can keep firing as long as they are supplied with electrical power.


When you hear the word comet, you might imagine a bright streak moving across the sky. You may have a family member who saw a comet before you were born, or you may have seen one yourself when comet Nishimura passed by Earth[1] in September 2023. But what are these special celestial objects made of? Where do they come from, and why do they have such long tails?

As a planetarium director[2], I spend most of my time getting people excited about and interested in space. Nothing piques people’s interest in Earth’s place in the universe quite like comets. They’re unpredictable, and they often go undetected until they get close to the Sun. I still get excited when one comes into view.

What exactly is a comet?

Comets are leftover material from the formation of the solar system. As the solar system formed about 4.5 billion years ago[3], most gas, dust, rock and metal ended up in the Sun or the planets. What did not get captured was left over as comets and asteroids[4].

Because comets are[5] clumps of rock, dust, ice and the frozen forms of various gases and molecules, they’re often called[6] “dirty snowballs” or “icy dirtballs” by astronomers. These clumps of ice and dirt make up what’s called the comet nucleus.

A diagram showing comet nuclei, which look like gray rocks, of progressively larger sizes.
Size comparison of various comet nuclei. NASA, ESA, Zena Levy (STScI)[7]

Outside the nucleus is a porous, almost fluffy layer of ice, kind of like a snow cone. This layer is surrounded by a dense crystalline crust[8], which forms when the comet passes near the Sun and its outer layers heat up. With a crispy outside and a fluffy inside, astronomers have compared comets to deep-fried ice cream[9].

Most comets are a few miles wide[10], and the largest known is about 85 miles[11] wide. Because they are relatively small and dark compared with other objects in the solar system, people can’t see a comet unless it gets close to the Sun.

Pin the tail on the comet

Starry sky with a comet in the mid left portion of the image and a tree in the foreground
Comet Hale-Bopp as seen from Earth in 1997. The blue ion tail is visible to the top left of the comet. Philipp Salzgeber[12], CC BY-ND[13]

As a comet moves close to the Sun, it heats up. The various frozen gases and molecules making up the comet change directly from solid ice to gas in a process called sublimation[14]. This sublimation process releases dust particles trapped under the comet’s surface.

The dust and released gas form a cloud around the comet called a coma. This gas and dust interact with the Sun to form two different tails[15].

The first tail, made up of gas, is called the ion tail[16]. The Sun’s radiation strips electrons from the gases in the coma, leaving them with a positive charge. These charged gases are called ions. Wind from the Sun then pushes these charged gas particles directly away from the Sun, forming a tail that appears blue in color. The blue color comes from large numbers of carbon monoxide[17] ions in the tail.

The dust tail forms from the dust particles released during sublimation. These are pushed away from the Sun by pressure caused by the Sun’s light[18]. The tail reflects the sunlight and swoops behind the comet as it moves, giving the comet’s tail a curve[19].

The closer a comet gets to the Sun, the longer and brighter its tail will grow. The tail can grow significantly longer than the nucleus, clocking in at around half a million miles long[20].

Where do comets come from?

All comets have highly eccentric orbits[21]. Their paths are elongated ovals with extreme trajectories that take them both very close to and very far from the Sun.

Comets’ orbits can be very long, meaning they may spend most of their time in far-off reaches of the solar system.

An object will orbit faster the closer it is[22] to the Sun, as angular momentum is conserved[23]. Think about how an ice skater spins faster[24] when they bring their arms in closer to their body – similarly, comets speed up when they get close to the Sun. Otherwise, comets spend most of their time moving relatively slowly through the outer reaches of the solar system.
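The ice-skater analogy can be made concrete. At the two extremes of the orbit (perihelion, closest to the Sun, and aphelion, farthest away), the comet’s velocity is perpendicular to its radius, so conservation of angular momentum means the product of speed and distance is the same at both points. A sketch with purely made-up numbers, not data for any real comet:

```python
# Angular momentum L = m*v*r is conserved. At perihelion and aphelion the
# velocity is perpendicular to the radius, so v*r is equal at both points.
# The numbers below are purely illustrative.
r_perihelion_au = 0.6    # closest approach to the Sun, in astronomical units
v_perihelion = 55.0      # speed at perihelion, km/s (hypothetical)
r_aphelion_au = 35.0     # farthest point from the Sun

v_aphelion = v_perihelion * r_perihelion_au / r_aphelion_au
print(f"Speed at aphelion: {v_aphelion:.2f} km/s")  # far slower out there
```

A comet that races past the Sun at tens of kilometers per second can crawl along at under one kilometer per second in the outer solar system, which is why it spends most of its orbit far from view.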

A lot of comets likely originate in a far-out region of our solar system called the Oort cloud[25].

The Oort cloud is predicted to be a round shell of small solar system bodies[26] that surrounds the solar system, with an innermost boundary about 2,000 times farther from the Sun than Earth. For reference, Pluto is only about 40 times farther[27].
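Those two distances make the comparison easy to quantify. A one-line check, with distances in astronomical units (one AU is the Earth-Sun distance):

```python
# How much farther out the Oort cloud's inner edge is than Pluto,
# using the distances quoted above (in astronomical units).
oort_inner_au = 2_000   # inner edge of the Oort cloud
pluto_au = 40           # Pluto's rough distance from the Sun

print(oort_inner_au / pluto_au)  # the Oort cloud begins ~50x beyond Pluto
```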

Sphere of small particles with a disk like structure in the middle. A tiny rectangle in the center points to a zoomed in image of the Sun and planet orbits
A NASA diagram of the Oort cloud’s structure. The term KBO refers to Kuiper Belt objects near where Pluto lies. NASA[28]

Comets from the Oort cloud take more than 200 years to complete their orbits; the time to complete one orbit is called the orbital period. Because of their long periods, they’re called long-period comets[29]. Astronomers often don’t know much about these comets until they get close to the inner solar system.

Short-period comets[30], on the other hand, have orbital periods of less than 200 years. Halley’s comet is a famous comet that comes close to the Sun every 75 years.

While that’s a long time for a human, that’s a short period for a comet. Short-period comets generally come from the Kuiper Belt[31], an asteroid belt out beyond Neptune and, most famously, the home of Pluto.

There’s a subset of short-period comets that get only to about Jupiter’s orbit at their farthest point from the Sun. These have orbital periods of less than 20 years and are called Jupiter-family comets[32].
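The three families described above are defined purely by orbital period, so the taxonomy fits in a few lines. A toy classifier (the function name is my own; Jupiter-family is itself a subset of short-period):

```python
# Classify a comet by orbital period, per the definitions above:
# Jupiter-family (<20 years), short-period (<200 years), long-period (>=200).
def comet_family(period_years: float) -> str:
    if period_years < 20:
        return "Jupiter-family comet"
    if period_years < 200:
        return "short-period comet"
    return "long-period comet"

print(comet_family(75))  # Halley's comet: short-period comet
```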

Comets’ time in the inner solar system is relatively short, generally on the order of weeks to months[33]. As they approach the Sun, their tails grow and they brighten before fading on their way back to the outer solar system.

But even the short-period comets don’t come around often, and their porous interior means they can sometimes fall apart. All of this makes their behavior difficult to predict[34]. Astronomers can track comets when they are coming toward the inner solar system and make predictions based on observations. But they never quite know if a comet will get bright enough to be seen with the naked eye as it passes Earth, or if it will fall apart and fizzle out as it enters the inner solar system.

Either way, comets will keep people looking up at the skies for years to come.


Cancer arises when cells accumulate enough damage to change their normal behavior. The likelihood of accruing damage increases with age[1] because the safeguards in your genetic code that ensure cells function for the greater good of the body weaken over time.

Why, then, do children who haven’t had sufficient time to accumulate damage develop cancer?

I am a doctoral student[2] who is exploring the evolutionary origins of cancer. Viewed through an evolutionary lens, cancer develops from the breakdown of the cellular collaboration[3] that initially enabled cells to come together and function as one organism.

Cells in children are still learning how to collaborate. Pediatric cancer develops when rogue cells that defy cooperation emerge and grow at the body’s expense.

Adult versus pediatric cancer

The cells in your body adhere to a set of instructions defined by their genetic makeup[4] – a unique code that carries all the information that cells need to perform their specific function. When cells divide, the genetic code is copied and passed from one cell to another. Copying errors can occur in this process and contribute to the development of cancer.

In adults, cancer evolves through a gradual accrual of errors and damage in the genetic code. Although there are safeguards against uncontrolled cell growth[5] and repair mechanisms[6] to fix genetic errors, aging, exposure to environmental toxins and an unhealthy lifestyle can weaken these protections and lead to the breakdown of tissues. The most common types of adult cancers, such as breast cancer[7] and lung cancer[8], often result from such accumulated damage.

In children, whose tissues are still developing, there is a dual dynamic between growth and cancer prevention. On one hand, rapidly dividing cells are organizing themselves into tissues in an environment with limited immune surveillance[9] – an ideal setting for cancer development. On the other hand, children have robust safeguards and tightly regulated mechanisms that act as counterforces against cancer and make it a rare occurrence.

Father carrying child with cancer wearing a bandana and holding a stuffed animal, talking to a health care provider
Although pediatric cancer is rare, it is a leading cause of death for children under 15 in the U.S. FatCamera/E+ via Getty Images[10]

Children seldom accumulate errors in their genetic code, and pediatric cancer patients have a much lower incidence of genetic errors[11] than adult cancer patients. However, nearly 10%[12] of pediatric cancer[13] cases in the U.S. are due to inherited genetic mutations. The most common heritable cancers arise from genetic errors that influence cell fate – that is, what a cell becomes – during the developmental stages before birth. Mistakes in embryonic cells accumulate in all subsequent cells after birth and can ultimately manifest as cancer.

Pediatric cancers can also spontaneously arise while children are growing. These are driven by genetic alterations distinct from those common in adults. Unlike in adults, where damage typically accumulates as small errors during cell division, pediatric cancers often result from large-scale rearrangements[14] of the genetic code. Different regions of the genetic code swap places, disrupting the cell’s instructions beyond repair.

Such changes frequently occur in tissues with constant turnover, such as the brain[15], muscles[16] and blood[17]. Unsurprisingly, the most prevalent[18] pediatric cancers often emerge from these tissues.

Genetic alterations are not a prerequisite for pediatric cancers. In certain pediatric brain cancers, the region of the genetic code responsible for cell specialization becomes permanently silenced[19]. Although there is no error in the genetic code itself, the cell is unable to read it. Consequently, these cells become trapped in an uncontrolled state of division, ultimately leading to cancer.

Tailoring treatments for pediatric cancer

Cells in children typically exhibit greater growth, mobility and flexibility. This means that pediatric cancer is often more invasive and aggressive[20] than adult cancer, and it can severely affect development even after successful therapy because of long-term damage. Because the cancer trajectories in children and adults are markedly different, treatment approaches should also be different for each.

Standard cancer therapy includes radiotherapy or chemotherapy, which affect both cancerous and healthy, actively dividing cells. If the patient becomes unresponsive to these treatments, oncologists try a different drug.

In children, the side effects of certain treatments are amplified[21] since their cells are actively growing. Unlike adult cancers, where different drugs can target different genetic errors, pediatric cancers have fewer of these targets[22]. The rarity of pediatric cancer also makes it challenging to test new therapies in large-scale clinical trials.

Standard cancer treatments can lead to lifelong effects for pediatric patients.

A common reason for treatment failure is when cancer cells adapt to evade treatment and become drug resistant[23]. Applying principles from evolutionary biology to cancer treatment can help tackle this.

For example, extinction therapy[24] is an approach to treatment inspired by natural mass extinction events. The goal of this therapy is to eradicate all cancer cells before they can evolve. It does this by applying a “first strike” drug that kills most cancer cells. The remaining few cancer cells are then targeted through focused, smaller-scale interventions.

If complete extinction is not possible, the goal turns to preventing treatment resistance and keeping the tumor from progressing. This can be achieved with adaptive therapy[25], which takes advantage of the competition for survival among cancer cells. Treatment is dynamically turned “on” and “off” to keep the tumor stable while allowing cells that are sensitive to the therapy to out-compete and suppress resistant cells. This approach preserves the tissue[26] and improves survival.

Although pediatric cancer patients have a better prognosis than adults do after treatment, cancer remains the second-leading cause of death[27] in children under 15 in the U.S. Recognizing the developmental differences between pediatric and adult cancers and using evolutionary theory[28] to “anticipate and steer[29]” the cancer’s trajectory can enhance outcomes for children. This could ultimately improve young patients’ chances for a brighter, cancer-free future.


Everyone has a different tolerance for spicy food — some love the burn, while others can’t take the heat. But the scientific consensus on whether spicy food can have an effect — positive or negative — on your health is pretty mixed.

In September 2023, a 14-year-old boy died after consuming a spicy pepper as part of the viral “one chip challenge[1].” The Paqui One Chip Challenge uses Carolina Reaper and Naga Viper peppers, which are among the hottest peppers in the world[2].

While the boy’s death is still under examination by health officials, it has prompted stores to remove some of the spicy chips used in these challenges from their shelves[3].

A cardboard display at a gas station reading 'One Chip Challenge Real Peppers Real Heat' with several bags and boxes of 'Paqui' brand chips.
Many stores have removed the Paqui One Chip Challenge chips from their shelves. AP Photo/Steve LeBlanc[4]

As an epidemiologist[5], I’m interested in how spicy food can affect people’s health and potentially worsen symptoms associated with chronic diseases like inflammatory bowel disease. I am also interested in how diet, including spicy foods, can increase or decrease a person’s lifespan.

The allure of spicy food

Spicy food can refer to food with plenty of flavor from spices, such as Asian curries, Tex-Mex dishes or Hungarian paprikash. It can also refer to foods with noticeable heat from capsaicin[6], a chemical compound found to varying degrees in hot peppers[7].

As the capsaicin content of a pepper increases, so does its ranking on the Scoville scale[8], which quantifies how hot a pepper tastes.

Capsaicin tastes hot because it activates certain biological pathways[9] in mammals – the same pathways activated by hot temperatures[10]. The pain produced by spicy food can provoke the body[11] to release endorphins and dopamine. This release can prompt a sense of relief or even a degree of euphoria.

In the U.S., the U.K. and elsewhere, more people than ever are consuming spicy foods[12], including extreme pepper varieties.

Hot-pepper-eating contests and similar “spicy food challenges” aren’t new, although spicy food challenges have gotten hotter – in terms of spice level and popularity on social media[13].

Hot peppers like the Carolina Reaper can induce sweating and make the consumer feel like their mouth is burning.

Short-term health effects

The short-term effects of consuming extremely spicy foods range from a pleasurable sensation of heat to an unpleasant burning sensation[14] across the lips, tongue and mouth. These foods can also cause various forms of digestive tract discomfort[15], headaches and vomiting[16].

If spicy foods are uncomfortable to eat, or cause unpleasant symptoms like migraines, abdominal pain and diarrhea, then it’s probably best to avoid those foods. Spicy food may cause these symptoms in people with inflammatory bowel diseases[17], for example.

Spicy food challenges notwithstanding, for many people across the world, consumption of spicy food is part of a long-term lifestyle influenced by geography and culture[18].

For example, hot peppers grow in hot climates, which may explain why many cultures in these climates use spicy foods[19] in their cooking. Some research suggests that spicy foods help control foodborne illnesses[20], which may also explain cultural preferences for spicy foods[21].

A plant growing several green chile peppers in a field.
Chile peppers growing in Mexico. AP Photo/Andres Leighton[22]

Lack of consensus

Nutritional epidemiologists have been studying the potential risks and benefits of long-term spicy food consumption for many years. Some of the outcomes examined[23] in relation to spicy food consumption include obesity[24], cardiovascular disease[25], cancer[26], Alzheimer’s disease[27], heartburn and ulcers[28], psychological health[29], pain sensitivity[30] and death from any cause[31] – also called all-cause mortality.

These studies report mixed results, with some outcomes like heartburn more strongly linked to spicy food consumption. As can be expected with an evolving science, some experts are more certain about some of these health effects than others.

For example, some experts state with confidence that spicy food does not cause stomach ulcers[32], whereas the association with stomach cancer[33] isn’t as clear.

When taking heart disease, cancer and all other causes of death in a study population into consideration, does eating spicy food increase or decrease the risk of early death?

Right now, the evidence from large population-based studies suggests that spicy food does not increase the risk of all-cause mortality among a population and may actually decrease the risk[34].

However, when considering the results of these studies, keep in mind that what people eat is one part of a larger set of lifestyle factors – such as physical activity, relative body weight and consumption of tobacco and alcohol – that also have health consequences.

It’s not easy for researchers to measure diet and lifestyle factors accurately in a population-based study, at least in part because people don’t always remember or report their exposure[35] accurately. It often takes numerous studies conducted over many years to reach a firm conclusion about how a dietary factor affects a certain aspect of health.

Scientists still don’t entirely know why so many people enjoy spicy foods[36] while others do not, although there is plenty of speculation[37] regarding evolutionary, cultural and geographic factors, as well as medical, biological and psychological ones[38].

One thing experts do know, however, is that humans are one of the few animals that will intentionally eat something spicy enough to cause them pain, all for the sake of pleasure[39].


Each October, the Nobel Prizes celebrate a handful of groundbreaking scientific achievements. And while many of the awarded discoveries revolutionize the field of science, some originate in unconventional places. For George de Hevesy[1], the 1943 Nobel Laureate in chemistry who discovered radioactive tracers, that place was a boarding house cafeteria in Manchester, U.K., in 1911.

A black and white headshot of a young man with a mustache wearing a suit.
Hungarian chemist George de Hevesy. Magnus Manske[2]

De Hevesy had a sneaking suspicion that the staff of the boarding house cafeteria where he ate every day was reusing leftovers from the dinner plates – each day’s soup seemed to contain all of the prior day’s ingredients. So he came up with a plan to test his theory.

At the time, de Hevesy was working with radioactive material. He sprinkled a small amount[3] of radioactive material in his leftover meat. A few days later, he took an electroscope with him to the kitchen and measured the radioactivity[4] in the prepared food.

His landlady, who was to blame for the recycled food, exclaimed “this is magic” when de Hevesy showed her his results, but really, it was just the first successful radioactive tracer experiment.

We are[5] a team of chemists[6] and physicists who work[7] at the Facility for Rare Isotope Beams[8], located at Michigan State University. De Hevesy’s early research in the field has revolutionized the way that modern scientists like us use radioactive material, and it has led to a variety of scientific and medical advances.

The nuisance of lead

A year before conducting his recycled ingredients experiment, Hungary-born de Hevesy had traveled to the U.K.[9] to start work with nuclear scientist Ernest Rutherford[10], who’d won a Nobel Prize just two years prior.

Rutherford was at the time working with a radioactive substance[11] called radium D, a valuable byproduct of radium because of its long half-life[12] (22 years). However, Rutherford couldn’t use his radium D sample, as it had large amounts of lead mixed in.

When de Hevesy arrived, Rutherford asked him to separate the radium D[13] from the nuisance lead. The nuisance lead was made up of a combination of stable isotopes of lead (Pb). Each isotope had the same number of protons (82 for lead), but a different number of neutrons.

De Hevesy worked on separating the radium D from the natural lead using chemical separation techniques for almost two years, with no success[14]. The reason for his failure was that, unknown to anyone at the time, radium D was actually a different form of lead – namely the radioactive isotope, or radioisotope Pb-210.

Nevertheless, de Hevesy’s failure led to an even bigger discovery. The creative scientist figured out that if he could not separate radium D from natural lead, he could use it as a tracer of lead.

Radioactive isotopes[15], like Pb-210, are unstable isotopes, which means that over time they will transform into a different element. During this transformation, called radioactive decay, they typically release particles or light, which can be detected as radioactivity[16].

Through radioactivity, an unstable isotope can turn from one element to another.

This radioactivity acts as a signature indicating the presence of the radioactive isotope. This critical property of radioisotopes allows them to be used as tracers.
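That signature fades on a predictable schedule set by the isotope’s half-life. Using the 22-year half-life of radium D (Pb-210) quoted earlier, the surviving fraction of a sample follows the standard half-life formula; a small sketch:

```python
# Fraction of a Pb-210 (radium D) sample that has not yet decayed
# after t years, given its 22-year half-life.
HALF_LIFE_YEARS = 22.0

def fraction_remaining(t_years: float) -> float:
    return 0.5 ** (t_years / HALF_LIFE_YEARS)

print(fraction_remaining(22.0))    # 0.5 after one half-life
print(fraction_remaining(110.0))   # 0.03125 after five half-lives
```

That long half-life is what made radium D so valuable as a tracer: its radioactive signature remains measurable over the entire course of an experiment.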

Radium D as a tracer

A tracer[17] is a substance that stands out in a crowd of similar material because it has unique qualities that make it easy to track.

For example, if you have a group of kindergartners going on a field trip and one of them is wearing a smartwatch, you can tell whether the group went to the playground by tracking the GPS signal from the smartwatch. In de Hevesy’s case, the kindergartners were the lead atoms, the smartwatch was radium D, and the GPS signal was the emitted radioactivity.

In the 1910s, the Vienna Institute of Radium Research[18] had a larger collection of radium[19] and its byproducts than any other institution. To continue his experiments with radium D, de Hevesy moved to Vienna in 1912.

He collaborated with Fritz Paneth, who had also attempted the impossible task of separating radium D from lead, without success. The two scientists “spiked” samples of different chemical compounds with small amounts of a radioactive tracer. This way they could study chemical processes by tracking the movement of the radioactivity across different chemical reactions[20].

De Hevesy continued his work studying chemical processes using different isotopic markers for many years. He was even the first to introduce nonradioactive tracers. One nonradioactive tracer he studied was a heavier isotope of hydrogen, called deuterium[21]. Deuterium is 10,000 times less abundant than common hydrogen, but it is roughly twice as heavy, which makes the two easier to separate.

De Hevesy and his co-author used deuterium to track water in their bodies. In their investigations, they took turns ingesting samples and measuring the deuterium in their urine to study the elimination of water[22] from the human body.

De Hevesy was awarded the 1943 Nobel Prize in chemistry[23] “for his work on the use of isotopes as tracers in the study of chemical processes.”

Radioactive tracers today

More than a century after de Hevesy’s experiments, many fields now routinely use radioactive tracers, from medicine to materials science and biology.

These tracers can monitor the progression of disease in medical procedures[24], the uptake of nutrients in plant biology[25], the age and flow of water in aquifers[26] and the measurement of wear and corrosion of materials[27], among other applications. Radioisotopes allow researchers to follow the paths of nutrients and drugs in living systems without invasively cutting the tissue.

Four brain scans, two in contrasted colors with the background shown as white and the brain as gray, two with the background shown as black and the brain shown either as gray or orange.
Radioactive tracers, seen in the top left photo as a white spot and indicated by an arrow in the top right, are often used today in brain scans. mr. suphachai praserdumrongchai/iStock via Getty Images[28]

In modern research, scientists focus on producing new isotopes and on developing procedures to use radioactive tracers more efficiently. The Facility for Rare Isotope Beams[29], or FRIB, where the three of us work, has a program dedicated to the production and harvesting of unique radioisotopes. These radioisotopes are then used in medical and other applications.

FRIB produces radioactive beams[30] for its basic science program. In the production process, a large number of unused isotopes are collected in a tank of water, where they can be later isolated and studied[31].

Two scientists, a woman wearing a white shirt and a man wearing a dark blue shirt, squat on the concrete ground in a laboratory with lots of machinery and shelves, and a green lit ceiling.
Scientists Greg Severin and Katharina Domnanich at the Facility for Rare Isotope Beams. Facility for Rare Isotope Beams.

One recent study involved the isolation of the radioisotope Zn-62[32] from the irradiated water. This was a challenging task considering there were 100 quadrillion times more water molecules than Zn-62 atoms. Zn-62 is an important radioactive tracer utilized to follow the metabolism of zinc in plants and in nuclear medicine.

Eighty years ago, de Hevesy managed to take a dead-end separation project and turn it into a discovery that created a new scientific field. Radioactive tracers have already changed human lives in so many ways. Nevertheless, scientists are continuing to develop new radioactive tracers and find innovative ways to use them.


The 2023 Nobel Prize for chemistry isn’t the[1] first Nobel[2] awarded for[3] research in[4] nanotechnology[5]. But it is perhaps the most colorful application of the technology to be associated with the accolade.

This year’s prize recognizes Moungi Bawendi[6], Louis Brus[7] and Alexei Ekimov[8] for the discovery and development of quantum dots[9]. For many years, these precisely constructed nanometer-sized particles[10] – just a few hundred thousandths the width of a human hair in diameter – were the darlings of nanotechnology pitches and presentations. As a researcher[11] and adviser[12] on nanotechnology, I’ve even used them myself[13] when talking with developers, policymakers, advocacy groups and others about the promise and perils of the technology.

The origins of nanotechnology predate Bawendi, Brus and Ekimov’s work on quantum dots – the physicist Richard Feynman speculated on what could be possible through nanoscale engineering as early as 1959[14], and engineers like Eric Drexler were speculating about the possibilities of atomically precise manufacturing in the 1980s[15]. However, this year’s trio of Nobel laureates were part of the earliest wave of modern nanotechnology, when researchers began putting breakthroughs in materials science to practical use[16].

Quantum dots brilliantly fluoresce[17]: They absorb one color of light and reemit it nearly instantaneously as another color. A vial of quantum dots, when illuminated with broad spectrum light, shines with a single vivid color. What makes them special, though, is that their color is determined by how large or small they are. Make them small and you get an intense blue. Make them larger, though still nanoscale, and the color shifts to red.

diagram of colorful circles of different sizes
The wavelength of light a quantum dot emits depends on its size. Maysinger, Ji, Hutter, Cooper[18], CC BY[19]

This property has led to many arresting images of rows of vials containing quantum dots of different sizes going from a striking blue on one end, through greens and oranges, to a vibrant red at the other. So eye-catching is this demonstration of the power of nanotechnology that, in the early 2000s, quantum dots became iconic of the strangeness and novelty of nanotechnology.

But, of course, quantum dots are more than a visually attractive parlor trick. They demonstrate that unique, controllable and useful interactions between matter and light can be achieved through engineering the physical form of matter – modifying the size, shape and structure of objects, for instance – rather than playing with the chemical bonds between atoms and molecules. The distinction is an important one, and it’s at the heart of modern nanotechnology.

Skip chemical bonds, rely on quantum physics

The wavelengths of light that a material absorbs, reflects or emits are usually determined by the chemical bonds that bind its constituent atoms together. Play with the chemistry of a material[20] and it’s possible to fine-tune these bonds so that they give you the colors you want. For instance, some of the earliest synthetic dyes started with a clear substance such as aniline[21], which was transformed through chemical reactions into the desired hue.

It’s an effective way to work with light and color, but it also leads to products that fade over time as those bonds degrade[22]. It also frequently involves using chemicals that are harmful to humans and the environment[23].

Quantum dots work differently. Rather than depending on chemical bonds to determine the wavelengths of light they absorb and emit, they rely on very small clusters of semiconducting materials[24]. It’s the quantum physics of these clusters[25] that then determines what wavelengths of light are emitted – and this in turn depends on how large or small the clusters are.
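A crude way to see this size dependence is the textbook particle-in-a-box (Brus-style) estimate for a cadmium selenide dot, sketched below. The material constants are standard reference values, and the model deliberately neglects refinements such as the electron-hole Coulomb attraction, so the numbers are only indicative.

```python
# Crude quantum-confinement estimate of how CdSe quantum-dot emission
# shifts with diameter: gap(d) = bulk gap + h^2/(8 d^2) * (1/me + 1/mh).
# Textbook material parameters; Coulomb correction neglected.

H = 6.626e-34         # Planck constant, J*s
C = 2.998e8           # speed of light, m/s
M_E = 9.109e-31       # electron rest mass, kg
EV = 1.602e-19        # joules per electronvolt

E_BULK_EV = 1.74      # bulk CdSe band gap, eV
M_EFF_E = 0.13 * M_E  # effective electron mass in CdSe
M_EFF_H = 0.45 * M_E  # effective hole mass in CdSe

def emission_wavelength_nm(diameter_nm: float) -> float:
    """Approximate emission wavelength for a CdSe dot of this diameter."""
    d = diameter_nm * 1e-9
    confinement_j = (H**2 / (8 * d**2)) * (1 / M_EFF_E + 1 / M_EFF_H)
    gap_j = E_BULK_EV * EV + confinement_j
    return H * C / gap_j * 1e9

for d in (2.0, 4.0, 6.0):
    print(f"{d:.0f} nm dot -> ~{emission_wavelength_nm(d):.0f} nm emission")
```

Even this rough model reproduces the trend in the article: a 2-nanometer dot comes out blue, while a 6-nanometer dot shifts toward red.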

This ability to tune how a material behaves by simply changing its size is a game changer when it comes to the intensity and quality of light that quantum dots can produce, as well as their resistance to bleaching or fading, their novel uses and – if engineered smartly – their toxicity.

Of course, few materials are completely nontoxic, and quantum dots are no exception. Early quantum dots were often based on cadmium selenide, for instance – a compound whose constituent elements are toxic. However, the potential toxicity of quantum dots needs to be balanced[26] against the likelihood of release and exposure and how they compare with alternatives.

people walk past colorful multi-screen display at a trade show
Quantum dots are now a normal part of many consumer items, including televisions. Soeren Stache/picture alliance via Getty Images[27]

Since its early days, quantum dot technology has evolved in safety and usefulness and has found its way into an increasing number of products, from displays[28] and lighting[29] to sensors[30], biomedical applications[31] and more. In the process, some of their novelty has perhaps worn off. It can be hard to remember just how much of a quantum leap the technology promoting the latest generation of flashy TVs[32] represents, for instance.

And yet, quantum dots are a pivotal part of a technology transition that’s revolutionizing how people work with atoms and molecules.

‘Base coding’ on an atomic level

In my book “Films from the Future: the Technology and Morality of Sci-Fi Movies[33],” I write about the concept of “base coding[34].” The idea is simple: If people can manipulate the most basic code that defines the world we live in, we can begin to redesign and reengineer it.

This concept is intuitive when it comes to computing, where programmers use the “base code” of 1s and 0s, albeit through higher-level languages. It also makes sense in biology, where scientists are becoming increasingly adept at reading and writing the base code of DNA and RNA – in this case, using the chemical bases adenine, guanine, cytosine and thymine as their coding language.

This ability to work with base codes also extends to the material world. Here, the code is made up of atoms and molecules and how they are arranged in ways that lead to novel properties.

Bawendi, Brus and Ekimov’s work on quantum dots is a perfect example of this form of material-world base coding. By precisely forming small clusters of particular atoms into spherical “dots,” they were able to tap into novel quantum properties that would otherwise be inaccessible. Through their work they demonstrated the transformative power that comes through coding with atoms.

An example of ‘base coding’ using atoms to create a material with novel properties is a single molecule ‘nanocar’ crafted by chemists that can be controlled as it ‘drives’ over a surface. Alexis van Venrooy/Rice University[35], CC BY-ND[36]

They paved the way for increasingly sophisticated nanoscale base coding that is now leading to products and applications that would not be possible without it. And they were part of the inspiration for a nanotechnology revolution[37] that is continuing to this day. Reengineering the material world in these novel ways far transcends what can be achieved through more conventional technologies.

This possibility was captured in a 1999 U.S. National Science and Technology Council report titled “Nanotechnology: Shaping the World Atom by Atom[38].” While it doesn’t explicitly mention quantum dots – an omission that I’m sure the authors are now kicking themselves over – it did capture just how transformative the ability to engineer materials at the atomic scale could be.

This atomic-level shaping of the world is exactly what Bawendi, Brus and Ekimov aspired to through their groundbreaking work. They were some of the first materials “base coders” as they used atomically precise engineering to harness the quantum physics of small particles – and the Nobel committee’s recognition of the significance of this is well deserved.

Read more

Electrons moving around in a molecule might not seem like the plot of an interesting movie. But a group of scientists will receive the 2023 Nobel Prize in physics[1] for research that essentially follows the movement of electrons[2] using ultrafast laser pulses, like capturing frames in a video camera.

However, electrons, which partly make up atoms[3] and form the glue that bonds atoms in molecules together, don’t move around on the same time scale people do. They’re much faster. So, the tools that physicists like me[4] use to capture their motion have to be really fast – attosecond-scale fast.

One attosecond[5] is one billionth of a billionth of a second (10⁻¹⁸ second) – the ratio of one attosecond to one second is the same as the ratio of one second to the age of the universe.
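A quick sanity check of that comparison, taking the age of the universe to be about 13.8 billion years:

```python
# Checking the comparison above: one attosecond is to one second roughly
# as one second is to the age of the universe (~13.8 billion years).

SECONDS_PER_YEAR = 365.25 * 24 * 3600           # ~3.16e7 seconds
age_of_universe_s = 13.8e9 * SECONDS_PER_YEAR   # ~4.4e17 seconds

attosecond_ratio = 1e-18 / 1.0        # attoseconds per second
universe_ratio = 1.0 / age_of_universe_s  # one second per universe age

print(f"attosecond : second  = {attosecond_ratio:.1e}")
print(f"second : universe    = {universe_ratio:.1e}")
```

The two ratios agree to within a factor of a few, which is all the analogy claims.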

Attosecond pulses

In photography, capturing clear images of fast objects requires a camera with a fast shutter[6] or a fast strobe of light to illuminate the object. By taking multiple photos in quick succession, the motion of the object can be clearly resolved.

The time scale of the shutter or the strobe must match the time scale of motion of the object – if not, the image will be blurred. This same idea applies when researchers attempt to image the ultrafast motion of electrons[7]. Capturing attosecond-scale motion requires an attosecond strobe. The 2023 Nobel laureates in physics[8] made seminal contributions to the generation of such attosecond laser strobes, which are very short pulses generated using a powerful laser.

Imagine the electrons in an atom are constrained within the atom by a wall. When a femtosecond (10⁻¹⁵ second) laser pulse from a high-powered femtosecond laser is directed at atoms of a noble gas such as argon, the strong electric field in the pulse lowers the wall.

This is possible because the laser electric field is comparable in strength to the electric field of the nucleus of the atom. Electrons see this lowered wall and pass through in a bizarre process called quantum tunneling[9].

As soon as the electrons exit the atom, the laser’s electric field captures them, accelerates them to high energies and slams them back into their parent atoms. This process of recollision results in the creation of attosecond bursts of laser light.
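This recollision picture has a well-known quantitative consequence, the “three-step model” cutoff law: the highest photon energy generated is roughly the atom’s ionization potential plus 3.17 times the electron’s quiver (ponderomotive) energy in the laser field. The sketch below plugs in typical, assumed numbers for a Ti:sapphire laser and argon; they are illustrative and not taken from the article.

```python
# Three-step-model cutoff estimate: E_max ~ I_p + 3.17 * U_p, where U_p
# is the ponderomotive (quiver) energy of the electron in the laser field.
# Laser intensity and wavelength below are typical assumed values.

def ponderomotive_energy_ev(intensity_w_cm2: float, wavelength_um: float) -> float:
    """U_p in eV for a linearly polarized field (standard engineering formula)."""
    return 9.33e-14 * intensity_w_cm2 * wavelength_um**2

IP_ARGON_EV = 15.76   # ionization potential of argon, eV
intensity = 1e14      # W/cm^2, typical femtosecond-laser intensity (assumed)
wavelength = 0.8      # microns, common Ti:sapphire wavelength (assumed)

u_p = ponderomotive_energy_ev(intensity, wavelength)
cutoff = IP_ARGON_EV + 3.17 * u_p
print(f"U_p ~ {u_p:.1f} eV, cutoff photon energy ~ {cutoff:.1f} eV")
```

Photon energies of a few tens of electronvolts land in the extreme ultraviolet, consistent with the attosecond bursts described above.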

A diagram showing how electrons gain, then release energy when exposed to a laser's electric field, with a pink arrow showing the laser's energy and small drawings of spheres stuck together indicating the atom.
A laser’s electric field allows electrons to escape from the atom, gain energy and then release energy as they’re reabsorbed back into the atom. Johan Jarnestad/The Royal Swedish Academy of Sciences[10], CC BY-NC-ND[11]

Attosecond movies

So how do physicists use these ultrashort pulses to make movies of electrons at the attosecond scale?

Conventional movies are made one scene at a time, with each instant captured as a frame with video cameras. The scenes are then stitched together to form the complete movie.

Attosecond movies of electrons use a similar idea. The attosecond pulses act as strobes, lighting up the electrons so researchers can capture their image, over and over again as they move – like a movie scene. This technique is called pump-probe spectroscopy[12].

However, imaging electron motion directly inside atoms is currently challenging, though researchers are developing several approaches using advanced microscopes to make direct imaging possible[13].

Typically, in pump-probe spectroscopy, a “pump” pulse gets the electron moving and starts the movie. A “probe” pulse then lights up the electron at different times after the arrival of the pump pulse, so it can be captured by the “camera,” such as a photoelectron spectrometer[14].

Pump-probe spectroscopy.

The information on the motion of electrons, or the “image,” is captured using sophisticated techniques. For example, a photoelectron spectrometer detects how many electrons were removed from the atom by the probe pulse, or a photon spectrometer[15] measures how much of the probe pulse was absorbed by the atom.

The different “scenes” are then stitched together to make the attosecond movies of electrons. These movies help provide fundamental insight, with help from sophisticated theoretical models[16], into attosecond electronic behavior.

For example, researchers have measured where the electric charge is located[17] in organic molecules at different times, on attosecond time scales. This could allow them to control electric currents on the molecular scale.

Future applications

In most scientific research, fundamental understanding of a process leads to control of the process, and such control leads to new technologies. Curiosity-driven research[18] can lead to unimaginable applications in the future, and attosecond science is likely no different.

Understanding and controlling the behavior of electrons on the attosecond scale could enable researchers to use lasers to control chemical reactions[19] that they can’t by other means. This ability could help engineer new molecules that cannot be created with existing chemical techniques.

The ability to modify electron behavior could lead to ultrafast switches. Researchers could potentially convert an electric insulator to a conductor on attosecond scales[20] to increase the speed of electronics. Electronics currently process information at the picosecond (10⁻¹² second) scale.

The short wavelength of attosecond pulses, which is typically in the extreme-ultraviolet, or EUV, regime, may see applications in EUV lithography[21] in the semiconductor industry. EUV lithography uses laser light with a very short wavelength to etch tiny circuits on electronic chips.

A line of silver pipes and machinery, in a bright room, with red and blue handles.
The Linac Coherent Light Source at SLAC National Accelerator Laboratory. Department of Energy[22], CC BY[23]

In the recent past, free-electron lasers such as the Linac Coherent Light Source[24] at SLAC National Accelerator Laboratory in the United States have emerged as a source of bright X-ray laser light. These now generate pulses on the attosecond scale, opening many possibilities for research using attosecond X-rays.

Ideas to generate laser pulses on the zeptosecond (10⁻²¹ second) scale have also been proposed. Scientists could use these pulses, which are even faster than attosecond pulses, to study the motion of particles like protons within the nucleus.

With numerous research groups actively working on exciting problems in attosecond science, and with 2023’s Nobel Prize in physics[25] recognizing its importance, attosecond science has a long and bright future.
