A letter recently published in Nature, on 329,000 young people, identified 74 genetic variants—spelling mistakes in single nucleotides of the six-billion-letter human genome—that can be used to predict nearly 20 percent of the variation in school years completed, a quantitative trait of fortitude that correlates with general intelligence. Sequence your own genome, and you can look them up.

Staple that to your college application.

Even before the “molecular age,” we were on guard for the slightest signs that we were more or less valued than our peers. But academics also cautioned that there was actually very little we could do to leverage our biology for improvement. In 1924, the Harvard geneticist William Castle quipped that “we are scarcely as yet in a position to do more than make ourselves ridiculous in this matter. We are no more in a position to control eugenics than the tides of the ocean.”

Enter Crispr-Cas9, the first pair of tiny molecular scissors that can alter nucleotides of DNA precisely and simply. If the first draft of the human genome, published at the turn of the millennium, was like introducing a Chilton’s auto manual for human genetics, Crispr-Cas9 is the socket set.

“In my opinion, Crispr could in principle be used to boost the expected intelligence of an embryo by a considerable amount,” said James J. Lee, a researcher at the University of Minnesota and one of the authors of that study. “But ‘in principle’ does a lot of work here. One practical obstacle is that we still do not have a reliable means of determining the causal site(s) responsible” for the association. In other words, Lee cautioned me, just because a genetic variant is associated with a quantitative trait, it might only be hitchhiking with another genetic variant in the area that is actually the cause, or “driver,” of the effect.

“There is a vast amount of work establishing the heritability of intelligence, and the reliability of measuring it,” the cognitive scientist Steven Pinker told me. “We know the genes are in there, but because each one accounts for such a small proportion of the variance, they are hard to pinpoint. I doubt that we’ll see parents using Crispr to implant any of them in their kids, for a number of practical reasons—there are too many genes, the effect of each one is small, we don’t know which ones have negative pleiotropic effects (meaning a variant that boosts one trait may harm others), and the safety impediments to allowing the procedure are almost certainly too steep.”

The safety and accuracy of Crispr-Cas9 are fast improving. New proteins are being discovered and selected that make the tool more accurate and less likely to cause “off-target effects,” meaning unintentional edits or disruptions to genes in a different neighborhood of the cell. Because each of our genomes is idiosyncratic, we can never be entirely sure that an edit will not land at a site highly similar to the intended target. But technical improvements are edging the technology closer to acceptance as a tool, and our knowledge of genetic targets grows ever more precise.

“Doing things to our own bodies seems to be something many people are willing to do,” said Steve Gullans, a scientist who has written on genetic enhancement, or gene doping, as it is known in professional sports circles. “Gene doping is an area of uncertainty in terms of legal and moral structures. Not sure who is the thought leader in this space. There is still too much concern about safety to do any enhancement, I believe. Once the safety issue is overcome in the minds of everyone, all bets are off.”

The bioethicists Julian Savulescu and John Harris have argued that manipulating the genetic code of our future children is not only a right but a duty, a concept termed “procreative beneficence,” and have extended the notion of parental neglect to “genetic neglect” should we decline to engineer. The bioethicist Hille Haker, by comparison, has noted that there is more to being human than genetics. Others, including the University of New Mexico academic David Correia, have envisioned dystopian outcomes, suggesting the wealthy might use genetic engineering to translate power from the social sphere into the enduring code of the genome, effectively a “legacy genetics” establishing “permanent capitalist social relations.”

But whatever changes we code into our genomes will be thrown up against different genetic backgrounds in future generations, thanks to the random reshuffling of chromosomes, so they are unlikely to fix any social relations permanently.

Perhaps more important, intelligence is not a simple input-output system; it is as much a developed ability to hold in mind and toggle between two or more opposing thoughts as it is a capacity for memory. There are also questions about what kind of intelligence we mean: the techno-scientific intelligence most easily converted to income in modern society, or the struggling, creative intelligence that produces novels and art. To think intelligently can require a sense of insecurity about some of the most basic facts of nature. David Foster Wallace demonstrated that probing essence of intelligence, and how troubling it can be, when he reflected on the question that bothered him most: “what is a number?”

In fact, there are no superior genes, only genes that confer advantages at the cost of other disadvantages. The COMT gene, for instance, encodes catechol-O-methyltransferase, an enzyme involved in the degradation of dopamine in the prefrontal and temporal cortex. People with two copies of one variant have a fourfold increase in COMT activity, while people with less active versions may have better concentration but also be more jittery. In 1995, Arnold Ludwig reported a 77 percent rate of psychiatric disorders among eminent fiction writers. Jonathan Gottschall noted that writers are 10 times, and poets 40 times, more likely than the general population to be bipolar.

Psychologists who study the connection between creativity and madness report that emotional turmoil is correlated with creativity up to a point, after which too much chronic stress leads to a decline in creative capacity, a concept broadly known as the “inverted U.” But this also tells us that stress influences intelligence in profound ways, and that intelligence is not so much coded as fought for and built. “My intelligence – whatever I call my intelligence – was assembled by that kid I was between the ages of 26 and 36 who just did not stop reading,” the author Junot Diaz told Scout Magazine. “That kid built the edifice which I currently claim as my own.”

In his 1999 paper “Genetic Enhancement in Humans,” Jon Gordon expressed great doubt that we will ever use genetics to improve our brains. “A useful way to appreciate the daunting task of manipulating intelligence through gene transfer is by considering the fact that a single cerebellar Purkinje cell may possess more synapses than the total number of genes in the human genome. There are tens of millions of Purkinje cells in the cerebellum, and these cells are involved in only one aspect of brain function: motor coordination. The genome only provides a blueprint for formation of the brain; the finer details of assembly and intellectual development are beyond direct genetic control and must perforce be subject to innumerable stochastic and environmental influences.”

If history is any indicator, the willingness to alter our brains is inevitable. In 1999, Joe Tsien and his colleagues at Princeton University shocked the world when they reported genetically engineering mice with better memories. They achieved the effect by popping an extra copy of the NR2B gene into the mouse genome. The gene encodes a subunit of the NMDA receptor, which is involved in memory formation and in a process neuroscientists call “long-term potentiation.” The press dubbed the super-smart mouse pups “Doogie mice,” after the television show Doogie Howser, M.D. (then in syndication). At the time, Tsien said that if it worked in humans, everyone would want to use it, since “everyone wants to be smart.”

Daniel Keyes anticipated Tsien’s experiment decades earlier in his 1966 novel Flowers for Algernon. The book unfolds through progress reports written by Charlie Gordon, a 32-year-old bakery worker with an IQ of 70 who grew up in the Warren State Home and Training School. In misspelled words and broken sentences, Charlie explains to readers that scientists have told him they have found a way to rapidly increase his intelligence. In fact, they say, they have already engineered a mouse named Algernon to become super smart.

Charlie’s IQ eventually soars to 186. He struggles with relationships as his intellectual development outpaces his emotional development. People at his bakery job start to resent him. During his courtship with a love interest, Alice, he begins to get close to her but senses that “Old Charlie” is near, and he becomes too self-conscious to stay close. She complains that all he wants to talk about is “cultural variants, and neo-Boulean mathematics, and post-symbolic logic.” Thus his character is divided into antithetical halves as he struggles to reconcile his capacity for slicing insight with his lagging emotional development.

Charlie decides: “Intelligence is one of the greatest human gifts. But all too often a search for knowledge drives out the search for love. This is something else I’ve discovered for myself very recently. I present it to you as a hypothesis: Intelligence without the ability to give and receive affection leads to mental and moral breakdown, to neurosis, and possibly even psychosis.”

Further reading

What Gene Therapy Needs Now is a Good Off Switch