Biomedical implants – such as pacemakers, breast implants and orthopedic hardware like the screws and plates used to repair broken bones – have improved patient outcomes across a wide range of diseases. However, many implants fail[1] because the body rejects them; once an implant stops functioning and begins causing pain or discomfort, it must be removed.

An immune reaction called the foreign body response[2] – where the body encapsulates the implant in sometimes painful scar tissue – is a key driver of implant rejection. Developing treatments that target the mechanisms driving foreign body responses could improve the design and safety of biomedical implants.

I am a biomedical engineer[3] who studies why the body forms scar tissue around medical devices. My colleagues Dharshan Sivaraj[4], Jagan Padmanabhan[5] and Geoffrey Gurtner[6] and I wanted to learn more about what causes foreign body responses. In our research, recently published in the journal Nature Biomedical Engineering, we identified a gene[7] that appears to drive this reaction in response to the increased stress implants put on the tissues surrounding them.

Many implants need to be replaced because the immune system damages them over time.

Mechanics of implant rejection

Researchers hypothesize that foreign body responses are triggered by the chemical and material composition of the implant. Just as a person can tell the difference between touching something soft like a pillow versus something hard like a table, cells can tell when there are changes to the softness or stiffness of the tissues surrounding them as a result of an implant.

The increased mechanical stress[8] on those cells sends a signal to the immune system that a foreign body is present. Immune cells activated by mechanical pressure respond by building a capsule made of scar tissue around the implant in an attempt to wall it off. The more severe the immune reaction, the thicker the capsule. This same response protects the body from infection after injuries like getting a splinter in your finger.

All biomedical implants cause some level of foreign body response and are surrounded by at least a small capsule. Some people have very strong reactions that result in a large, thick capsule that constricts around the implant, impeding its function and causing pain. Between 10% and 30% of implants[9] need to be removed because of this scar tissue. For example, a neurostimulator could trigger the formation of a dense capsule of scar tissue that inhibits electrical stimulation[10] from properly reaching the nervous system.

To understand why the immune systems of some people build thick capsules around implants while others do not, we gathered capsule samples from 20 patients whose breast implants were removed – 10 who had severe reactions, and 10 who had mild reactions. By genetically analyzing the samples, we found that a gene called RAC2[11] was highly expressed in samples taken from patients with severe reactions but not in those with mild reactions. This gene is found only in immune cells[12], and it codes for a member of a family of proteins[13] involved in cell growth and structure.

Because this protein seemed to be linked to a lot of the downstream reactions that lead to foreign body responses, we decided to explore how RAC2 affects the formation of capsules. We found that immune cells activate RAC2 along with other proteins in response to mechanical stress[14] from implants. These proteins summon additional immune cells to the area that combine into a massive clump[15] to attack a large invader. These combined cells spit out fibrous proteins like collagen that form scar tissue.

Clinician holding a silicone breast implant
The mechanical stress that medical devices like breast implants place on surrounding tissues can trigger a foreign body response. megaflopp/iStock via Getty Images Plus[16]

To confirm RAC2’s role in foreign body responses, we artificially stimulated the mechanical signaling proteins surrounding silicone implants surgically placed in mice. This stimulation produced a severe and humanlike foreign body response in the mice. In contrast, blocking RAC2 resulted in an up to threefold reduction[17] in foreign body responses.

These findings suggest that activating mechanical stress pathways triggers immune cells with RAC2 to generate severe foreign body responses. Blocking RAC2 in immune cells may significantly reduce this reaction.

Developing new treatments

Implant failure is conventionally treated by using biocompatible materials[18] that the body can better tolerate, such as certain polymers. These don’t completely remove the risk of foreign body reactions, however.

My colleagues and I believe that treatments targeting the pathways associated with RAC2 could mitigate or prevent foreign body responses. Heading off this reaction would help improve the effectiveness and safety of medical implants.

Because only immune cells express RAC2[19], a drug designed to block only that gene would theoretically target only immune cells without affecting other cells in the body. Such a drug could also be administered via injection or even coated onto an implant to minimize side effects.

A complete understanding of the molecular mechanisms driving foreign body responses would be the final frontier in developing truly bio-integrative medical devices that could integrate with the body with no problems for the recipient’s entire life span.

Read more

Microscopy image of Vibrio vulnificus

Flesh-eating bacteria sounds like the premise of a bad horror movie, but it’s a growing – and potentially fatal – threat to people.

In September 2023, the Centers for Disease Control and Prevention issued a health advisory[1] alerting doctors and public health officials of an increase in flesh-eating bacteria cases that can cause serious wound infections.

I’m a professor[2] at the Indiana University School of Medicine, where my laboratory[3] studies microbiology and infectious disease[4]. Here’s why the CDC is so concerned about this deadly infection – and ways to avoid contracting it.

What does ‘flesh-eating’ mean?

There are several types of bacteria that can infect open wounds and cause a rare condition called necrotizing fasciitis[5]. These bacteria do not merely damage the surface of the skin – they release toxins that destroy the underlying tissue, including muscles, nerves and blood vessels. Once the bacteria reach the bloodstream, they gain ready access to additional tissues and organ systems. If left untreated, necrotizing fasciitis can be fatal, sometimes within 48 hours.

The bacterial species group A Streptococcus[6], or group A strep, is the most common culprit behind necrotizing fasciitis. But the CDC’s latest warning points to an additional suspect, a type of bacteria called Vibrio vulnificus[7]. There are only 150 to 200 cases[8] of Vibrio vulnificus infection in the U.S. each year, but the mortality rate is high, with 1 in 5 people succumbing to the infection.

Climate change may be driving the rise in flesh-eating bacteria infections in the U.S.

How do you catch flesh-eating bacteria?

Vibrio vulnificus primarily lives in warm seawater but can also be found in brackish water – areas where the ocean mixes with freshwater. Most infections in the U.S. occur in the warmer months, between May and October[9]. People who swim, fish or wade in these bodies of water can contract the bacteria through an open wound or sore.

Vibrio vulnificus can also get into seafood harvested from these waters, especially shellfish like oysters. Eating such foods raw or undercooked can lead to food poisoning[10], and handling them while having an open wound can provide an entry point for the bacteria to cause necrotizing fasciitis. In the U.S., Vibrio vulnificus is a leading cause of seafood-associated fatality[11].

Why are flesh-eating bacteria infections rising?

Vibrio vulnificus is found in warm coastal waters around the world. In the U.S., this includes southern Gulf Coast states. But rising ocean temperatures due to global warming are creating new habitats for this type of bacteria, which can now be found along the East Coast as far north as New York and Connecticut[12]. A recent study[13] noted that Vibrio vulnificus wound infections increased eightfold between 1988 and 2018 in the eastern U.S.

Climate change[14] is also fueling stronger hurricanes and storm surges, which have been associated with spikes in flesh-eating bacteria infection cases.

In addition to increasing water temperatures, the number of people who are most vulnerable to severe infection[15], including those with diabetes[16] and those taking medications that suppress immunity, is on the rise.

What are symptoms of necrotizing fasciitis? How is it treated?

Early symptoms[17] of an infected wound include fever, redness, intense pain or swelling at the site of injury. If you have these symptoms, seek medical attention without delay. Necrotizing fasciitis can progress quickly[18], producing ulcers, blisters, skin discoloration and pus.

Treating flesh-eating bacteria[19] is a race against time. Clinicians administer antibiotics directly into the bloodstream to kill the bacteria. In many cases, damaged tissue needs to be surgically removed to stop the rapid spread of the infection. This sometimes results in amputation[20] of affected limbs.

Researchers are concerned that an increasing number of cases are becoming impossible to treat because Vibrio vulnificus has evolved resistance to certain antibiotics[21].

Necrotizing fasciitis is rare but deadly.

How do I protect myself?

The CDC offers several recommendations to help prevent infection[22].

People who have a fresh cut, including a new piercing or tattoo, are advised to stay out of water that could be home to Vibrio vulnificus. Otherwise, the wound should be completely covered with a waterproof bandage.

People with an open wound should also avoid handling raw seafood or fish. Wounds that occur while fishing, preparing seafood or swimming should be washed immediately and thoroughly with soap and water.

Anyone can contract necrotizing fasciitis, but people with weakened immune systems are most susceptible to severe disease[23]. This includes people taking immunosuppressive medications or those who have pre-existing conditions such as liver disease, cancer, HIV or diabetes.

Bear in mind that necrotizing fasciitis remains very rare[24]. But given its severity, it is worth staying informed.

Read more

Curious Kids[1] is a series for children of all ages. If you have a question you’d like an expert to answer, send it to the Curious Kids email address[2].

Why does a plane look and feel like it’s moving more slowly than it actually is? – Finn F., age 8, Concord, Massachusetts

A passenger jet flies[3] at about 575 mph once it’s at cruising altitude. That’s nearly nine times faster than a car might typically be cruising on the highway. So why does a plane in flight look like it’s just inching across the sky?

I am an aerospace educator[4] who relies on the laws of physics when teaching about aircraft. These same principles of physics help explain why looks can be deceiving when it comes to how fast an object is moving.

Moving against a featureless background

If you watch a plane accelerating toward takeoff, it appears to be moving very quickly. It’s not until the plane is in the air and has reached cruising altitude that it appears to be moving very slowly. That’s because there is often no independent reference point when the plane is in the sky.

A reference point is a way to measure the speed of the airplane. If there are no contrails[5] or clouds surrounding it, the plane is moving against a completely uniform blue sky. This can make it very hard to perceive just how fast a plane is moving. And because the plane is far away, it takes longer for it to move across your field of vision compared to an object that is close to you. This further creates the illusion that it is moving more slowly than it actually is.

These factors explain why a plane looks like it’s going more slowly than it is. But why does it feel that way, too?

A passenger’s perception on the plane

A plane feels like it’s traveling more slowly than it is because, just like when you look up at a plane in the sky, as a passenger on a plane, you have no independent reference point.
You and the plane are moving at the same speed, which can make it difficult to perceive your rate of motion relative to the ground beneath you. This is the same reason why it can be hard to tell that you are driving quickly on a highway that is surrounded only by empty fields with no trees.
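The "far away looks slow" effect can be put into rough numbers. What your eye perceives is angular speed – how fast an object sweeps across your field of vision – which is linear speed divided by distance. The sketch below uses assumed, illustrative distances (a jet about 10 km away, a car passing about 20 m from you):

```python
import math

def angular_speed_deg_per_s(speed_mph: float, distance_m: float) -> float:
    """Apparent (angular) speed of an object crossing your view.

    For motion roughly perpendicular to your line of sight, angular
    speed is linear speed divided by distance (small-angle approximation).
    """
    speed_m_per_s = speed_mph * 1609.344 / 3600  # mph -> m/s
    return math.degrees(speed_m_per_s / distance_m)

# A jet at cruise (~575 mph) seen from ~10 km away (assumed distance)
jet = angular_speed_deg_per_s(575, 10_000)

# A car on the highway (~65 mph) passing ~20 m from you (assumed distance)
car = angular_speed_deg_per_s(65, 20)

print(f"jet: {jet:.1f} deg/s, car: {car:.1f} deg/s")
```

With these assumptions the jet crosses your view at only about 1.5 degrees per second, while the far slower car sweeps past at over 80 degrees per second – which is why the jet seems to crawl.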
Perspective from a plane window of the plane's shadow against a brown field with the plane's white wing visible on the left side.
Watching the speed of a plane’s shadow can help you assess how quickly a plane is moving. Saul Loeb/AFP via Getty Images[6]

However, there are a couple of ways you might be able to understand just how fast you are moving.

Can you see the plane’s shadow[7] on the ground? It can give you perspective on how fast the plane is moving relative to the ground. If you are lucky enough to spot it, you will be amazed at how fast the plane’s shadow passes over buildings and roads. You can get a real sense of the 575 mph average speed of a cruising passenger plane.

Another way to understand how fast you are moving is to note how fast thin, spotty cloud cover moves over the wing. This reference point gives you another way to “see” or perceive your speed. Remember, though, that clouds aren’t typically stationary[8]; they’re just moving very slowly relative to the plane.
An airplane passes over thin, spotty cloud cover.
Although it can be difficult to discern just how fast a plane is actually moving, using reference points to gain perspective can help tremendously.

Has your interest in aviation been sparked? If so, there are a lot of great career opportunities in aeronautics[9].

Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to the Curious Kids email address[10]. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

Read more

Text saying: Uncommon Courses, from The Conversation
Uncommon Courses[1] is an occasional series from The Conversation U.S. highlighting unconventional approaches to teaching.

Title of course:

The Design of Coffee: An Introduction to Chemical Engineering

What prompted the idea for the course?

In 2012, my colleague professor Tonya Kuhl and I were drinking coffee and brainstorming how to improve our senior-level laboratory course in chemical engineering. Tonya looked at her coffee and suggested, “How about we have the students reverse-engineer a Mr. Coffee drip brewer to see how it works?” A light bulb went off in my head, and I said, “Why not make a whole course about coffee to introduce lots of students to chemical engineering?”

And that’s what we did. We developed The Design of Coffee as a freshman seminar for 18 students in 2013, and, since then, the course has grown to over 2,000 general education students per year at the University of California, Davis.
A student wearing a flannel shirt uses a white microscope, with a pile of coffee beans and a metal scoop sitting next to them on the table.
A student uses a microscope to look at coffee beans in The Design of Coffee lab. UC Davis

What does the course explore?

The course focuses on hands-on experiments with roasting, brewing and tasting in our coffee lab. For example, students measure the energy they use while roasting to illustrate the law of conservation of energy[2], they measure how the pH of the coffee[3] changes after brewing to illustrate the kinetics of chemical reactions, and they measure how the total dissolved solids[4] in the brewed coffee relate to time spent brewing to illustrate the principle of mass transfer[5].

The course culminates in an engineering design contest, where the students compete to make the best-tasting coffee using the least amount of energy. It’s a classic engineering optimization problem, but one that is broadly accessible – and tasty.

Why is this course relevant now?

Coffee plays a huge role in culture[6], diet[7] and the U.S.[8] and global economy[9]. But historically, relatively little academic work has focused on coffee. There are entire academic programs on wine and beer at many major universities, but almost none on coffee.
A student wearing a black UC Davis sweatshirt holds a glass cup of coffee
Many students who don’t like coffee develop a taste for it over the course of the class. UC Davis

The Design of Coffee helps fill a huge unmet demand because students are eager to learn about the beverage that they already enjoy. Perhaps most surprisingly, many of our students enter the course professing to hate coffee, but by the end of the course they are roasting and brewing their own coffee beans at home.

What’s a critical lesson from the course?

Many students are shocked to learn that black coffee can have fruity, floral or sweet flavors[10] without adding any sugar or syrups. The most important lesson from the course is that engineering is really a quantitative way to think about problem-solving. For example, if the problem to solve is “make coffee taste sweet without adding sugar,” then an engineering approach provides you with a tool set to tackle that problem quantitatively and rigorously.

What materials does the course feature?

Tonya and I originally self-published our lab manual, The Design of Coffee: An Engineering Approach[11], to keep prices low for our students. Now in its third edition, it has sold more than 15,000 copies and has been translated into Spanish[12], with Korean and Indonesian translations on the way.

What will the course prepare students to do?

Years ago, a student in our class told the campus newspaper, “I had no idea there was an engineering way to think about coffee!” Our main goal is to teach students that there is an engineering way to think about anything. The engineering skills and mindset we teach equally prepare students to design a multimillion-dollar biofuel refinery, a billion-dollar pharmaceutical production facility or, most challenging of all, a naturally sweet and delicious $3 cup of coffee. Our course is the first step in preparing students to tackle these problems, as well as new problems that no one has yet encountered.

Read more

In an exciting milestone for lunar scientists around the globe[1], India’s Chandrayaan-3 lander[2] touched down 375 miles (600 km)[3] from the south pole of the Moon[4] on Aug. 23, 2023.

In just under 14 Earth days, Chandrayaan-3 provided scientists with valuable new data and further inspiration to explore the Moon[5]. And the Indian Space Research Organization[6] has shared these initial results[7] with the world.

While the data from Chandrayaan-3’s rover[8], named Pragyan, or “wisdom” in Sanskrit, showed the lunar soil[9] contains expected elements such as iron, titanium, aluminum and calcium, it also revealed a surprise – sulfur[10].

India’s lunar rover Pragyan rolls out of the lander and onto the surface.

Planetary scientists like me[11] have known that sulfur exists in lunar rocks and soils[12], but only at a very low concentration. These new measurements imply there may be a higher sulfur concentration than anticipated.

Pragyan has two instruments that analyze the elemental composition of the soil – an alpha particle X-ray spectrometer[13] and a laser-induced breakdown spectrometer[14], or LIBS[15] for short. Both of these instruments measured sulfur in the soil near the landing site.

Sulfur in soils near the Moon’s poles might help astronauts live off the land one day, making these measurements an example of science that enables exploration.

Geology of the Moon

There are two main rock types[16] on the Moon’s surface[17] – dark volcanic rock and the brighter highland rock. The brightness difference[18] between these two materials forms the familiar “man in the moon[19]” face or “rabbit picking rice” image to the naked eye.

The Moon, with the dark regions outlined in red, showing a face with two ovals for eyes and two shapes for the nose and mouth.
The dark regions of the Moon have dark volcanic soil, while the brighter regions have highland soil. Avrand6/Wikimedia Commons[20], CC BY-SA[21]

Scientists measuring lunar rock and soil compositions in labs on Earth have found that materials from the dark volcanic plains tend to have more sulfur[22] than the brighter highlands material.

Sulfur mainly comes from[23] volcanic activity. Rocks deep in the Moon contain sulfur, and when these rocks melt, the sulfur becomes part of the magma. When the melted rock nears the surface, most of the sulfur in the magma becomes a gas that is released along with water vapor and carbon dioxide.

Some of the sulfur does stay in the magma and is retained within the rock after it cools. This process explains why sulfur is primarily associated with the Moon’s dark volcanic rocks.

Chandrayaan-3’s measurements of sulfur in lunar soil are the first ever made on the Moon itself. The exact amount of sulfur cannot be determined until the data calibration is completed.

The uncalibrated data[24] collected by the LIBS instrument on Pragyan suggests that the Moon’s highland soils near the poles might have a higher sulfur concentration than highland soils from the equator and possibly even higher than the dark volcanic soils.

These initial results give planetary scientists like me[25] who study the Moon new insights into how it works as a geologic system. But we’ll still have to wait and see if the fully calibrated data from the Chandrayaan-3 team confirms an elevated sulfur concentration.

Atmospheric sulfur formation

The measurement of sulfur is interesting to scientists for at least two reasons. First, these findings indicate that the highland soils at the lunar poles could have fundamentally different compositions, compared with highland soils at the lunar equatorial regions. This compositional difference likely comes from the different environmental conditions between the two regions – the poles get less direct sunlight.

Second, these results suggest that there’s somehow more sulfur in the polar regions. Sulfur concentrated here could have formed[26] from the exceedingly thin lunar atmosphere.

The polar regions of the Moon receive less direct sunlight and, as a result, experience extremely low temperatures[27] compared with the rest of the Moon. If the surface temperature falls below -73 degrees C (-99 degrees F), then sulfur from the lunar atmosphere could collect on the surface in solid form – like frost on a window.

Sulfur at the poles could also have originated from ancient volcanic eruptions[28] occurring on the lunar surface, or from meteorites containing sulfur that struck the surface and vaporized on impact.

Lunar sulfur as a resource

For long-lasting space missions, many agencies have thought about building some sort of base on the Moon[29]. Astronauts and robots could travel from the south pole base to collect, process, store and use naturally occurring materials like sulfur on the Moon – a concept called in-situ resource utilization[30].

In-situ resource utilization means fewer trips back to Earth to get supplies and more time and energy spent exploring. Using sulfur as a resource, astronauts could build solar cells and batteries that use sulfur, mix up sulfur-based fertilizer and make sulfur-based concrete for construction[31].

Sulfur-based concrete[32] actually has several benefits compared with the concrete normally used in building projects on Earth[33].

For one, sulfur-based concrete hardens and becomes strong within hours rather than weeks, and it’s more resistant to wear[34]. It also doesn’t require water in the mixture, so astronauts could save their valuable water for drinking, crafting breathable oxygen and making rocket fuel.

The gray surface of the Moon as seen from above, with a box showing the rover's location in the center.
The Chandrayaan-3 lander, pictured as a bright white spot in the center of the box. The box is 1,108 feet (338 meters) wide. NASA/GSFC/Arizona State University

While seven missions[35] are currently operating on or around the Moon, the lunar south pole region[36] hasn’t been studied from the surface before, so Pragyan’s new measurements will help planetary scientists understand the geologic history of the Moon. It’ll also allow lunar scientists like me to ask new questions about how the Moon formed and evolved.

For now, the scientists at the Indian Space Research Organization are busy processing and calibrating the data. On the lunar surface, Chandrayaan-3 is hibernating through the two-week-long lunar night, when temperatures will drop to -184 degrees F (-120 degrees C). The night will last until Sept. 22.

There’s no guarantee that the lander component of Chandrayaan-3, called Vikram, or Pragyan will survive the extremely low temperatures, but should Pragyan awaken, scientists can expect more valuable measurements.

Read more

Each day, you leave digital traces of what you did, where you went, who you communicated with, what you bought, what you’re thinking of buying, and much more. This mass of data serves as a library of clues for personalized ads, which are sent to you by a sophisticated network – an automated marketplace[1] of advertisers, publishers and ad brokers that operates at lightning speed.

The ad networks are designed to shield your identity, but companies and governments are able to combine that information with other data, particularly phone location, to identify you and track your movements and online activity[2]. More invasive yet is spyware[3] – malicious software that a government agent, private investigator or criminal installs on someone’s phone or computer without their knowledge or consent. Spyware lets the user see the contents of the target’s device, including calls, texts, email and voicemail. Some forms of spyware can take control of a phone, including turning on its microphone and camera.

Now, according to an investigative report[4] by the Israeli newspaper Haaretz, an Israeli technology company called Insanet has developed the means of delivering spyware via online ad networks, turning some targeted ads into Trojan horses. According to the report, there’s no defense against the spyware, and the Israeli government has given Insanet approval to sell the technology.

Sneaking in unseen

Insanet’s spyware, Sherlock, is not the first spyware that can be installed on a phone without the need to trick the phone’s owner into clicking on a malicious link or downloading a malicious file. NSO[5]’s iPhone-hacking Pegasus[6], for instance, is one of the most controversial spyware tools to emerge in the past five years.

Pegasus relies on vulnerabilities in Apple’s iOS, the iPhone operating system, to infiltrate a phone undetected. Apple issued a security update[7] for the latest vulnerability[8] on Sept. 7, 2023.

Diagram showing the different entities involved in real time bidding, and the requests and responses
When you see an ad on a web page, behind the scenes an ad network has just automatically conducted an auction to decide which advertiser won the right to present their ad to you. Eric Zeng, CC BY-ND[9]
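The auction described in the caption above is usually a form of second-price bidding: the highest bidder wins the impression but pays roughly the runner-up's price. The sketch below is a simplified, illustrative model (the advertiser names and bid values are invented), which also shows why a well-funded, narrowly targeted campaign can reliably win the right to put its ad in front of one specific person:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float  # bid in dollars per thousand impressions (CPM)

def run_auction(bids: list[Bid]) -> tuple[Bid, float]:
    """Simplified second-price auction, the model many ad exchanges use:
    the highest bidder wins but pays just above the second-highest bid."""
    ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    price = runner_up.amount + 0.01
    return winner, price

# Hypothetical bids competing for one ad slot shown to one user
bids = [
    Bid("shoes-brand", 4.50),
    Bid("travel-site", 6.20),
    Bid("targeted-campaign", 9.00),  # outbids everyone for this user
]
winner, price = run_auction(bids)
print(winner.advertiser, price)
```

In this toy run the highest bidder wins and pays 6.21, one cent above the second-highest bid; in a real exchange, the same dynamic means an attacker willing to overpay can win nearly every auction for ads served to their target.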

What sets Insanet’s Sherlock apart from Pegasus is its exploitation of ad networks rather than vulnerabilities in phones. A Sherlock user creates an ad campaign that narrowly focuses on the target’s demographic and location, and places a spyware-laden ad with an ad exchange. Once the ad is served to a web page that the target views, the spyware is secretly installed on the target’s phone or computer.

Although it’s too early to determine the full extent of Sherlock’s capabilities and limitations, the Haaretz report found that it can infect Windows-based computers and Android phones as well as iPhones.

Spyware vs. malware

Ad networks have been used to deliver malicious software for years, a practice dubbed malvertising[10]. In most cases, the malware is aimed at computers rather than phones, is indiscriminate, and is designed to lock a user’s data as part of a ransomware attack or steal passwords to access online accounts or organizational networks. The ad networks constantly scan for malvertising and rapidly block it when detected.

Spyware, on the other hand, tends to be aimed at phones, is targeted at specific people or narrow categories of people, and is designed to clandestinely obtain sensitive information and monitor someone’s activities. Once spyware infiltrates your system[11], it can record keystrokes, take screenshots and use various tracking mechanisms before transmitting your stolen data to the spyware’s creator.

While its actual capabilities are still under investigation, the new Sherlock spyware is at least capable of infiltration, monitoring, data capture and data transmission, according to the Haaretz report.

The new Sherlock spyware is likely to have the same frightening capabilities as the previously discovered Pegasus.

Who’s using spyware

From 2011 to 2023, at least 74 governments engaged in contracts with commercial companies to acquire spyware or digital forensics technology[12]. National governments might deploy spyware for surveillance and gathering intelligence as well as combating crime and terrorism. Law enforcement agencies might similarly use spyware as part of investigative efforts[13], especially in cases involving cybercrime, organized crime or national security threats.

Companies might use spyware to monitor employees’ computer activities[14], ostensibly to protect intellectual property, prevent data breaches or ensure compliance with company policies. Private investigators might use spyware to gather information and evidence for clients[15] on legal or personal matters. Hackers and organized crime figures might use spyware to steal information to use in fraud or extortion schemes[16].

On top of the revelation that Israeli cybersecurity firms have developed a defense-proof technology that appropriates online advertising for civilian surveillance, a key concern is that Insanet’s advanced spyware was legally authorized by the Israeli government for sale to a broader audience. This potentially puts virtually everyone at risk.

The silver lining is that Sherlock appears to be expensive to use. According to an internal company document cited in the Haaretz report, a single Sherlock infection costs a client a hefty US$6.4 million.

Read more

Since ChatGPT’s release in late 2022, many news outlets have reported on the ethical threats posed by artificial intelligence. Tech pundits have issued warnings of killer robots bent on human extinction[1], while the World Economic Forum predicted that machines will take away jobs[2].

The tech sector is slashing its workforce[3] even as it invests in AI-enhanced productivity tools[4]. Writers and actors in Hollywood are on strike[5] to protect their jobs and their likenesses[6]. And scholars continue to show how these systems heighten existing biases[7] or create meaningless jobs – amid myriad other problems.

There is a better way to bring artificial intelligence into workplaces. I know, because I’ve seen it, as a sociologist[8] who works with NASA’s robotic spacecraft teams.

The scientists and engineers I study are busy exploring the surface of Mars[9] with the help of AI-equipped rovers. But their job is no science fiction fantasy. It’s an example of the power of weaving machine and human intelligence together, in service of a common goal.

An artist's rendition of the Perseverance rover, made of metal with six small wheels, a camera and a robotic arm.
Mars rovers act as an important part of NASA’s team, even while operating millions of miles away from their scientist teammates. NASA/JPL-Caltech via AP[10]

Instead of replacing humans, these robots partner with us to extend and complement human qualities. Along the way, they avoid common ethical pitfalls and chart a humane path for working with AI.

The replacement myth in AI

Stories of killer robots and job losses illustrate how a “replacement myth” dominates the way people think about AI. In this view, humans can and will be replaced by automated machines[11].

Amid the existential threat is the promise of business boons like greater efficiency[12], improved profit margins[13] and more leisure time[14].

Empirical evidence shows that automation does not cut costs. Instead, it increases inequality by cutting out low-status workers[15] and increasing the salary cost[16] for high-status workers who remain. Meanwhile, today’s productivity tools inspire employees to work more[17] for their employers, not less.

Alternatives to straight-out replacement are “mixed autonomy” systems, where people and robots work together. For example, self-driving cars must be programmed[18] to operate in traffic alongside human drivers. Autonomy is “mixed” because both humans and robots operate in the same system, and their actions influence each other.

A zoomed-in shot of a white car with a bumper sticker reading 'self-driving car.'
Self-driving cars, while operating without human intervention, still require training from human engineers and data collected by humans. AP Photo/Tony Avelar[19]

However, mixed autonomy is often seen as a step along the way to replacement[20]. And it can lead to systems where humans merely feed, curate or teach AI tools[21]. This saddles humans with “ghost work[22]” – mindless, piecemeal tasks that programmers hope machine learning will soon render obsolete.

Replacement raises red flags for AI ethics. Work like tagging content to train AI[23] or scrubbing Facebook posts[24] typically features traumatic tasks[25] and a poorly paid workforce[26] spread across[27] the Global South[28]. And legions of autonomous vehicle designers are obsessed with “the trolley problem[29]” – determining when or whether it is ethical to run over pedestrians.

But my research with robotic spacecraft teams at NASA[30] shows that when companies reject the replacement myth and opt for building human-robot teams instead, many of the ethical issues with AI vanish.

Extending rather than replacing

Strong human-robot teams[31] work best when they extend and augment[32] human capabilities instead of replacing them. Engineers craft machines that can do work that humans cannot. Then, they weave machine and human labor together intelligently, working toward a shared goal[33].

Often, this teamwork means sending robots to do jobs that are physically dangerous for humans. Minesweeping[34], search-and-rescue[35], spacewalks[36] and deep-sea[37] robots are all real-world examples.

Teamwork also means leveraging the combined strengths of both robotic and human senses or intelligences[38]. After all, there are many capabilities that robots have that humans do not – and vice versa.

For instance, human eyes on Mars can only see dimly lit, dusty red terrain stretching to the horizon. So engineers outfit Mars rovers with camera filters[39] to “see” infrared wavelengths of light that are invisible to human eyes, returning pictures in brilliant false colors[40].

A false-color photo from the point of view of a rover standing at the cliff overlooking a brown, sandy desert-like area that looks blue in the distance.
Mars rovers capture images in near infrared to show what Martian soil is made of. NASA/JPL-Caltech/Cornell Univ./Arizona State Univ[41]

Meanwhile, the rovers’ onboard AI cannot generate scientific findings. It is only by combining colorful sensor results with expert discussion that scientists can use these robotic eyes to uncover new truths about Mars[42].
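The false-color idea described above can be sketched in a few lines: sensor bands that human eyes cannot perceive are remapped to visible color channels. This is a simplified illustration, not NASA's actual image pipeline; the band arrays and the `false_color` function are hypothetical stand-ins.

```python
import numpy as np

def false_color(nir, red, green):
    """Map three sensor bands to RGB channels to build a false-color image.

    Each band is a 2D array of raw sensor readings. Near-infrared is shown
    as red, so features invisible to human eyes become visible in the image.
    """
    def normalize(band):
        # Stretch each band to the 0..1 display range
        band = band.astype(float)
        lo, hi = band.min(), band.max()
        return (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)

    # Stack normalized bands into an H x W x 3 image: NIR->R, red->G, green->B
    return np.dstack([normalize(b) for b in (nir, red, green)])

# Tiny synthetic scene: one bright infrared feature at the top-left corner
nir = np.array([[0.9, 0.1], [0.1, 0.1]])
red = np.array([[0.2, 0.2], [0.2, 0.2]])
green = np.array([[0.1, 0.1], [0.1, 0.8]])
img = false_color(nir, red, green)
```

In the output image, the infrared-bright corner appears vividly red even though it would look unremarkable to the naked eye, which is the whole point of false-color compositing.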

Respectful data

Another ethical challenge to AI is how data is harvested and used. Generative AI is trained on artists’ and writers’ work without their consent[43], commercial datasets are rife with bias[44], and ChatGPT “hallucinates”[45] answers to questions.

The real-world consequences of this data use in AI range from lawsuits[46] to racial profiling[47].

Robots on Mars also rely on data, processing power and machine learning techniques to do their jobs. But the data they need is visual and distance information to generate driveable pathways[48] or suggest cool new images[49].

By focusing on the world around them instead of our social worlds, these robotic systems avoid the questions around surveillance[50], bias[51] and exploitation[52] that plague today’s AI.
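To make concrete what "generating driveable pathways" from visual and distance data can look like, here is a minimal sketch: a breadth-first search over an occupancy grid, where obstacles come from sensor data. This is an illustrative toy, not the rovers' actual navigation software; the grid and the `plan_path` function are hypothetical.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over an occupancy grid: 0 = drivable, 1 = obstacle.

    Returns a list of (row, col) cells from start to goal, or None if the
    goal is unreachable. The grid is a stand-in for the terrain map a
    planner would build from camera and range sensors.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # remembers each cell's predecessor
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the predecessor chain back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    return None

# A small map with a wall across the middle: the planner routes around it
terrain = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
route = plan_path(terrain, (0, 0), (2, 0))
```

Because breadth-first search explores cells in order of distance, the first route it finds around the wall is also the shortest drivable one.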

The ethics of care

When integrated seamlessly, robots can elicit human emotions and unite the groups[53] that work with them. For example, seasoned soldiers mourn broken drones on the battlefield[54], and families give names and personalities to their Roombas[55].

I saw NASA engineers break down in anxious tears[56] when the rovers Spirit and Opportunity were threatened by Martian dust storms.

A hand petting a light blue, circular Roomba vacuum.
Some people feel a connection to their robot vacuums, similar to the connection NASA engineers feel to Mars rovers. nikolay100/iStock / Getty Images Plus via Getty Images[57]

Unlike anthropomorphism[58] – projecting human characteristics onto a machine – this feeling is born from a sense of care for the machine. It is developed through daily interactions, mutual accomplishments and shared responsibility.

When machines inspire a sense of care, they can underline – not undermine – the qualities that make people human.

A better AI is possible

In industries where AI could be used to replace workers, technology experts might consider how clever human-machine partnerships could enhance human capabilities instead of detracting from them.

Script-writing teams may appreciate an artificial agent that can look up dialog or cross-reference on the fly. Artists could write or curate their own algorithms to fuel creativity[59] and retain credit for their work. Bots to support software teams might improve meeting communication and find errors that emerge from compiling code.

Of course, rejecting replacement does not eliminate all ethical concerns[60] with AI. But many problems associated with human livelihood, agency and bias shift when replacement is no longer the goal.

The replacement fantasy is just one of many possible futures for AI and society. After all, no one would watch “Star Wars” if the ‘droids replaced all the protagonists. For a more ethical vision of humans’ future with AI, you can look to the human-machine teams that are already alive and well, in space and on Earth.
