
When, in 1996, French nun Mariannick Caniou found out she didn’t have Huntington’s disease, the lethal, degenerative genetic disorder, she fell into a depression. Throughout her life, she had been convinced that she would develop the illness that had killed her mother and grandmother. So convinced, in fact, that all her most important decisions had been based on that conviction: her decision not to marry, for example, or not to have children. She didn’t regret her decision to enter the religious life, but now she had to wonder if the specter of Huntington’s had haunted that too: “Everything I had built, my life, seemed no more substantial than air.”

In the 1980s, when doctors realized they would soon have a predictive test for Huntington’s, they did not foresee stories like Sister Caniou’s. They were deeply worried about the effect the test would have on those who took it, but the focus of their worry was, understandably, those who got an unfavorable result. They even freed up beds on psychiatric wards in anticipation of a mini tsunami of psychotic conversions. The tsunami never materialized, because those who received bad news generally coped well with it. It was the ones who got the all-clear—those like Caniou—who did not.

In the three decades since the first predictive genetic tests became available, a great deal of data has accumulated to show how people respond to knowing previously unknowable things. The rise of genetic testing has presented scientists with a 30-year experiment that has yielded some surprising insights into human behavior. The data suggest that the vast majority react in ways that at first seem counterintuitive, or at least defy what experts predicted. But as genetic testing becomes more widespread, the irrational behavior of a frightened few might start to look like the rational behavior of an enlightened majority.

Doctors’ repeated failure to anticipate people’s responses to genetic testing is not for want of preparation. Starting in the 1980s, they conducted surveys in which they asked how people might approach the test, were one available. They noted the answers and planned accordingly. The trouble was, when the test became a reality, their respondents didn’t do what they had said they would.

Huntington’s was one of the first diseases for which a test with predictive power—meaning that it could tell a healthy person that at some point in the future they would develop the disease—became available, in 1993. (A less reliable test had existed since 1986.) The neurodegenerative disease, which usually manifests in middle age, is caused by a mutation in a single gene. (You only have to inherit one copy of the mutant gene to develop the disease.) Families had previously lived with the terrible partial knowledge that a child of an affected parent had a 50 percent chance of developing it. Now they could find out who would, and who would not.

In those preparatory surveys, roughly 70 percent of those at risk of Huntington’s said they would take a test if it existed. In fact, only around 15 percent do—a proportion that has proved stable across countries and decades. A similar pattern emerged when tests became available for other incurable brain diseases, including rare familial forms of Alzheimer’s disease and frontotemporal dementia: The vast majority of people prefer not to know.

There is a certain logic to this. Why know if there’s nothing you can do about it? And that logic is borne out by data on other diseases, which show that uptake of tests increases with the availability of effective interventions. Around two-thirds of women diagnosed with breast cancer now survive for 20 years or more—twice the proportion of 40 years ago—mainly due to improved treatment. And while only a small fraction of breast cancers are inherited, surveys indicate that 60 percent of those at risk for those heritable forms take a test when one is available.

Aad Tibben, a psychologist at the Leiden University Medical Center in the Netherlands who has studied responses to genetic testing for 30 years, says that the 15 percent who do get tested for an incurable disease generally cite two reasons. The first and most important is the need to dispel uncertainty, and the second is a desire not to pass on the faulty gene. The first explains why carriers cope well, at least to begin with: Any result is a relief for them. Even if there is no treatment, they can make informed reproductive choices and plan for the future.

When it comes to the second reason, however, there is another puzzling discrepancy between what people say they want and what they do. Prenatal genetic testing is widely available, but the uptake by expectant couples in which one partner is a known carrier of an incurable disease is even lower than that of testing among at-risk adults. Most opt to have a child whose risk of developing that disease is the same as theirs was at birth. Why do people act in this seemingly irresponsible way with respect to their offspring?

A unique longitudinal study published in 2016 by Hanane Bouchghoul and colleagues at the Pitié-Salpêtrière Hospital in Paris unpacks that decision-making process. They interviewed 54 women—either Huntington’s carriers or wives of carriers—and found that if a couple received a favorable result in a first prenatal test, the majority had the child and stopped there. Most of those who got an unfavorable result terminated the pregnancy and tried again. If a second prenatal test produced a “good” result, they had the child and stopped. But if it produced a “bad” result and another termination, most changed strategy. Some opted for preimplantation genetic diagnosis, removing the need for termination, since only mutation-free embryos are implanted. Some abandoned the idea of having a child altogether. But nearly half, 45 percent, conceived naturally again, and this time they did not seek prenatal testing. Summarizing the findings, the geneticist on the team, Alexandra Dürr, says, “The desire to have a child overrides all else.”

Prenatal testing can be traumatic, especially when it results in a termination, and Dürr says nobody goes into it lightly. A couple can’t be forced to terminate a pregnancy, but internationally endorsed guidelines strongly advise that course of action following an unfavorable result, because otherwise the child has been subjected to a disguised predictive test. That is, their status is known, even though they themselves might choose not to know as an adult, or to know but not to tell anyone. (Predictive testing in children is generally only recommended if the disease has a childhood onset and if therapies exist.)

In a study that has yet to be published, Tibben has corroborated the French group’s conclusion. He followed 13 couples who, following counseling but prior to taking a prenatal test, agreed they would terminate in the case of an unfavorable result. None of them did so when they got that result. “That means there are 13 children alive in the Netherlands today, whom we can be 100 percent sure are [Huntington’s] carriers,” he says.

Before predictive tests became available, doctors hoped their advent would lead to the eradication of certain rare diseases from the gene pool within a couple of generations, rendering the search for cures obsolete. But the low uptake of testing among those at risk, combined with the even lower uptake of prenatal testing among carriers, has caused them to abandon that hope. “We now know that these diseases are here to stay,” says Tibben.

Until now, those diseases have mainly been Mendelian or single-gene disorders for which testing delivers certainty. But as testing expands, more uncertainty is creeping into the process, meaning people’s responses are likely to become less predictable still. Tests for inherited breast cancers that highlight mutations in the two BRCA genes sometimes identify variants of those genes whose clinical significance is uncertain, for example. Soon, a new kind of inconclusive test will become mainstream: whole genome sequencing, which reveals predispositions to “lifestyle” diseases such as obesity that are only partly under genetic control.

Unpredictability, therefore, may turn out to be the norm. Perhaps more than anything else, the lesson from the past three decades has been that people’s reactions to these new tools will upend our predictions and our assumptions. And it is difficult to label such behavior irrational. After all, another thing the genetic testing experiment has taught us is that certainty is fleeting—something Caniou knows only too well. She recovered from her depression and went on to know a profound sense of liberation and joy. Then, 10 years ago, she was diagnosed with breast cancer. It was treated early and she has been healthy ever since. But the irony hasn’t escaped her that when she finally knew illness, it wasn’t the one she had expected.

To paraphrase a famous saying, there are still only two certainties in life. One of them is that we will all die. Of what, we can be less sure.