A record of nootropics I have tried, with thoughts about which ones worked and did not work for me. These anecdotes should be considered only as anecdotes, and one’s efforts with nootropics a hobby to put only limited amounts of time into, due to the inherent limits of drugs as a force-multiplier compared to other things like programming; for an ironic counterpoint, I suggest the reader listen to a recording of Jonathan Coulton’s “I Feel Fantastic” while reading.

Background

Your mileage will vary. There are so many parameters and interactions in the brain that any of them could be the bottleneck or responsible pathway, and one could fall prey to the common U-shaped dose-response curve (eg. Yerkes-Dodson law; see also “Chemistry of the adaptive mind” & de Jongh et al 2008), which may imply that the smartest are those who benefit least. Ultimately, nootropics all cash out in a very few subjective assessments like ‘energetic’ or ‘motivated’, with even apparently precise descriptions like ‘working memory’ or ‘verbal fluency’ not telling you much about what the nootropic actually did. It’s tempting to list the nootropics that worked for you and tell everyone to go use them, but that is merely generalizing from one example (and the more nootropics - or meditation styles, or self-help books, or “getting things done” systems - you try, the stronger the temptation is to evangelize). The best you can do is read all the testimonials and studies and use them to prioritize your list of nootropics to try. You don’t know in advance which ones will pay off and which will be wasted. You can’t know in advance. And wasted some must be; to coin a Umeshism: if all your experiments work, you’re just fooling yourself. (And the corollary - if someone else’s experiments always work, they’re not telling you everything.) The above are all reasons to expect that even if I do excellent single-subject design self-experiments, there will still be the old problem of “internal validity” versus “external validity”: an experiment may be wrong or erroneous or unlucky in some way (lack of internal validity), or be right but not matter to anyone else (lack of external validity). For example, alcohol makes me sad & depressed; I could run the perfect blind randomized experiment for hundreds of trials and be extremely sure that alcohol makes me less happy, but would that prove that alcohol makes everyone sad or unhappy?
Of course not; as far as I know, for a lot of people alcohol has the opposite effect. So my hypothetical alcohol experiment might have tremendous internal validity (it does prove that I am sadder after imbibing), and zero external validity (someone who has never tried alcohol learns nothing about whether they will be depressed after imbibing). Keep this in mind if you are minded to take the experiments too seriously. Somewhat ironically given the stereotypes, while I was in college I dabbled very little in nootropics, sticking to melatonin and tea. Since then I have come to find nootropics useful, and intellectually interesting: they shed light on issues in the philosophy of biology & evolution, argue against naive psychological dualism and for materialism, offer cases in point on the history of technology & civilization or recent psychology theories about addiction & willpower, challenge our understanding of the validity of statistics and psychology - where they don’t offer nifty little problems in statistics and economics themselves - and are excellent fodder for the young Quantified Self movement; modafinil itself demonstrates the little-known fact that sleep has no accepted evolutionary explanation. (The hard drugs also have more ramifications than one might expect: how can one understand the history of Southeast Asia and the Vietnam War without reference to heroin, or, more contemporaneously, how can one understand the lasting appeal of the Taliban in Afghanistan and the unpopularity & corruption of the central government without reference to the Taliban’s frequent anti-drug campaigns or the drug-funded warlords of the Northern Alliance?)

Golden age

Nootropics have been around a long time, but they’ve never been so prominent, easily accessed, cheap, or available in such a variety. I think there is no single factor responsible but rather existing trends progressing to the point where it’s possible to obtain much obscurer things than before.
(In particular, I don’t think it’s because there’s a sudden new surge of drugs. FDA drug approval has been decreasing over the past few decades, so this is unlikely a priori. More specifically, many of the major or hot drugs go back a long time. Bacopa goes back millennia, melatonin I don’t even know, piracetam was the ’60s, modafinil was the ’70s or ’80s, ALCAR was the ’80s AFAIK, Noopept & coluracetam were the ’90s, and so on.) What I see as relevant is a combination of these trends:

- The rise of IP-scofflaw countries which enable the manufacture of known drugs: India does not respect the modafinil patents, enabling the cheap generics we all use, and Chinese piracetam manufacturers don’t give a damn about the FDA’s chilling-effect moves in the US. If there were no Indian or Chinese manufacturers, where would we get our modafinil? Buy it from pharmacies at $10 a pill or worse? It might be worthwhile, but think of the chilling effect on new users.
- Along with the previous bit of globalization comes an important factor: shipping is ridiculously cheap. The most expensive S&H in my modafinil price table is ~$15 (and most are international). To put this in perspective, I remember in the ‘90s you could easily pay $15 for domestic S&H when you ordered online - but it’s 2013, and the dollar has lost at least half its value, so in real terms, ordering from abroad may cost like a quarter of what it used to, which makes a big difference to people dipping their toes in and contemplating a small order to try out this ’nootropics’ thing they’ve heard about.
- As scientific papers become much more accessible online due to Open Access, digitization by publishers, and cheap hosting for pirates, the available knowledge about nootropics increases drastically. This reduces the perceived risk to users, and enables them to educate themselves and make much more sophisticated estimates of risk, side-effects, and benefits.
(Take my modafinil page: in 1997, how could an average person get their hands on any of the papers available up to that point? Or get detailed info like the FDA’s prescribing guide? Even assuming they had a computer & Internet?)

- The larger size of the community enables economies of scale and increases the peak sophistication possible. In a small nootropics community, there is likely to be no one knowledgeable about statistics/experimentation/biochemistry/neuroscience/whatever-you-need-for-a-particular-discussion. And the available funds increase: consider /r/Nootropics’s testing program, which is doable only because it’s a large lucrative community to sell to, so the sellers are willing to donate funds for independent lab tests/Certificates of Analysis (COAs) to be done. If there were 1,000 readers rather than 23,295, how could this ever happen short of one of those 1,000 readers being very altruistic?
- Nootropics users tend to ‘stick’. If modafinil works well for you, you’re probably going to keep using it on and off. So simply as time passes, one would expect the userbase to grow. Similarly for press coverage and forum comments and blog posts: as time passes, the total mass increases, and the more likely a random person is to learn of this stuff.

Defaults

I do recommend a few things, like modafinil or melatonin, to many adults, albeit with misgivings about any attempt to generalize like that. (It’s also often a good idea to get powders; see the appendix.) Some of those people are helped; some have told me that they tried and the suggestion did little or nothing. I view nootropics as akin to a biological lottery; one good discovery pays for all. I forge on in the hopes of further striking gold in my particular biology. Your mileage will vary. All you have to do, all you can do, is just try it. Most of my experiences were in my 20s as a right-handed 5’11 white male weighing 190-220lbs, fitness varying over time from not-so-fit to fairly fit.
In rough order of personal effectiveness weighted by costs+side-effects, I rank them as follows:

- Modafinil/armodafinil (less than weekly for overnight; skipping days for day use)
- Melatonin (daily)
- Caffeine+theanine (daily)
- Nicotine (weekly)
- Piracetam+choline (daily)
- Vitamin D (daily)
- Sulbutiamine (daily)

(People aged <=18 shouldn’t be using any of this except harmless stuff - where one may have nutritional deficits - like fish oil & vitamin D; melatonin may be especially useful, thanks to the effects of screwed-up school schedules & electronics use on teenagers’ sleep. Changes in effects with age are real - amphetamines’ stimulant effects and modafinil’s histamine-like side-effects come to mind as examples.)

Prospects for Nootropics

I’ve become increasingly skeptical of nootropics in general which aren’t either stimulants or addressing special cases (like creatine for vegetarians). This is partially due to modern genomics convincing me that intelligence and most other individual differences are driven by mutation load: just a ton of small bits of sand in the gears of everything, with intelligence particularly acutely affected by problems upstream (eg. in mitochondria). On that conception, it is extremely improbable to find any particular silver bullet. We also have yet to find any genetic mutations which boost intelligence by more than a trivial amount.
On the other hand, personality/motivation seem somewhat more susceptible to modification, because personality is in selection balance: unlike intelligence, where more is better, for every environment (like the modern environment) there is an optimal amount of Extraversion, which is not being maximally Extraverted; there is a Conscientiousness level which is optimal to prevent slacking (but too much leads to behavioral inflexibility and sunk costs); and so on. So there’s plenty of potential leeway for something to modify motivation substantially, because evolution doesn’t ever want to move motivation/personality too far from the population mean. That said, I don’t have any master list of particularly promising candidates. There’s nothing I think could be a silver bullet if only someone would run a proper study.

Acetyl-l-carnitine (ALCAR)

No effects, alone or mixed with choline+piracetam. This is pretty much as expected from reports about ALCAR (Examine.com), but I had still been hoping for energy boosts or something. (Bought from Smart Powders.)

Adderall

Adderall is a mix of 4 amphetamine salts (FDA adverse events), and not much better than the others (but perhaps less addictive); as such, like caffeine or methamphetamine, it is not strictly a nootropic but a cognitive enhancer, and can be tricky to use right (for how one should use stimulants, see “How To Take Ritalin Correctly”). I ordered 10x10mg Adderall IR off Silk Road (Wikipedia). On the 4th day after confirmation from the seller, the package arrived. It was a harmless-looking little padded mailer. Adderall as promised: 10 blue pills with markings, in a double ziplock baggie (reasonable; it’s not cocaine or anything). They matched pretty much exactly the descriptions of the generic I had found online. (Surprisingly, apparently both the brand name and the generic are manufactured by the same pharmacorp.) I took the first pill at 12:48 PM. 1:18: still nothing really - head is a little foggy if anything. Later I noticed a steady sort of mental energy lasting for hours (got a good deal of reading and programming done) until my midnight walk, when I still felt alert, and had trouble sleeping. (Zeo reported a ZQ of 100, but a full 18 minutes awake, 2 or 3 times the usual amount.) At this point, I began thinking about what I was doing. Black-market Adderall is fairly expensive: $4-10 a pill vs. prescription prices, which run more like $60 for 120 20mg pills. It would be a bad idea to become a fan without being quite sure that it is delivering bang for the buck. Now, why the piracetam mix as the placebo, as opposed to my other available powder, creatine, which has much smaller mental effects? Because the question for me is not whether the Adderall works (I am quite sure that the amphetamines have effects!) but whether it works better for me than my cheap legal standbys (piracetam & caffeine). (Does Adderall have a marginal advantage for me?) Hence, I want to know whether Adderall is better than my piracetam mix.
People frequently underestimate the power of placebo effects, so it’s worth testing. (Unfortunately, it seems that there is experimental evidence that people on Adderall know they are on Adderall and also believe they have improved performance when they do not. So the blind testing does not buy me as much as it could.)

Adderall blind testing

Blinding yourself

But how to blind myself? I used my pill maker to make 9 OO pills of piracetam mix, and then 9 OO pills of piracetam mix + the Adderall, and put them all in a baggie. The idea is that I can blind myself as to what pill I am taking that day, since at the end of the day I can just look in the baggie and see whether a placebo or Adderall pill is missing: the big capsules are transparent, so I can see whether there is a crushed-up blue Adderall inside or not. If there are fewer Adderall than placebo, I took an Adderall, and vice-versa. Now, since I am checking at the end of each day, I also need to remove or add the opposite pill to maintain the ratio and make it easy to check the next day; more importantly, I need to replace or remove a pill because otherwise the odds will be skewed and I will know how they are skewed. (Imagine I started with 4 Adderalls and 4 placebos, and then 3 days in a row I draw placebos but I don’t add or remove any pills; the next day, because most of the placebos have been used up, there’s only a small chance I will get a placebo…) This is only one of many ways to blind myself; for example, instead of using one bag, one could use two bags and blindly pick a bag to take a pill out of, balancing contents as before. (See also my vitamin D and day modafinil trials.)

Results

Began double-blind trial. Today I took one pill blindly at 1:53 PM. At the end of the day, when I have written down my impressions and guessed whether it was one of the Adderall pills, I can look in the baggie and count to see whether it was.
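The rebalancing rule can be sanity-checked with a quick simulation (a sketch I am adding here, not part of the original protocol): an observer who tracks the remaining pill counts and always guesses the majority type can beat chance when the bag is allowed to skew, but not when it is topped back up to an even ratio every evening.

```python
import random

def blind_guess_accuracy(rebalance, trials=20000, n=4, seed=0):
    """Each 'day' a pill is drawn at random from a bag of n Adderall +
    n placebo pills. Before drawing, an observer who tracks the remaining
    counts guesses whichever type is more numerous. With nightly
    rebalancing back to n:n, the counts leak nothing; without it, the
    skewed counts make the next pill partly predictable."""
    rng = random.Random(seed)
    correct = days = 0
    for _ in range(trials):
        adderall = placebo = n
        while adderall + placebo > 0:
            guess_adderall = adderall >= placebo
            took_adderall = rng.random() < adderall / (adderall + placebo)
            correct += (guess_adderall == took_adderall)
            days += 1
            if took_adderall:
                adderall -= 1
            else:
                placebo -= 1
            if rebalance:
                adderall = placebo = n  # restore the 50:50 ratio each evening
                break  # one representative day per trial when rebalancing
    return correct / days

print(blind_guess_accuracy(rebalance=True))   # hovers around 0.5: blinding holds
print(blind_guess_accuracy(rebalance=False))  # noticeably above 0.5: information leaks
```

The two-bag variant mentioned above achieves the same thing by a different route: the bag choice, not the pill counts, carries the randomness.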
(There are many other procedures one can use to blind oneself: have an accomplice mix up a sequence of pills and record what the sequence was; don’t count & see, but blindly take a photograph of the pill each day; etc.) Around 3, I begin to wonder whether it was Adderall, because I am arguing more than usual on IRC and my heart rate seems a bit high just sitting down. 6 PM: I’ve started to think it was a placebo. My heart rate is back to normal, I am having difficulty concentrating on long text, and my appetite has shown up for dinner (although I didn’t have lunch, I don’t think I had lunch yesterday, and yesterday the hunger didn’t show up until past 7). Productivity-wise, it has been a normal day. All in all, I’m not too sure, but I think I’d guess it was Adderall with 40% confidence (another way of saying ‘placebo with 60% confidence’). When I go to examine the baggie at 8:20 PM, I find out… it was an Adderall pill after all. Oh dear. One little strike against Adderall, that I guessed wrong. It may be that the problem is that I am intrinsically a little worse today (normal variation? come-down from Adderall?). So, a change to the protocol: I will take a pill every other day - a day to wash out and reacclimate to ‘baseline’, and then an experimental day. In subsequent entries, assume there was at least one intervening break or placebo day. Took a random pill at 2:02 PM. Went to lunch half an hour afterwards, talked until 4 - more outgoing than my usual self. I continued to be pretty energetic despite not taking my caffeine+piracetam pills, and though it’s now 12:30 AM and I listened to TAM YouTube videos all day while reading, I feel pretty energetic and am reviewing Mnemosyne cards. I am pretty confident the pill today was Adderall. Hard to believe the placebo effect could do this much for this long, or that normal variation would account for this. I’d say 90% confidence it was Adderall.
I do some more Mnemosyne, typing practice, and reading in a Montaigne book, and finally get tired and go to bed around 1:30 AM or so. I check the baggie when I wake up the next morning, and sure enough, it had been an Adderall pill. That makes me 1 for 2. Took pill 1:27 PM. At 2 my hunger gets the best of me (despite my usual tea drinking and caffeine+piracetam pills) and I eat a large lunch. This makes me suspicious it was placebo - on the previous days I had noted a considerable appetite-suppressant effect. 5:25 PM: I don’t feel unusually tired, but nothing special about my productivity. 8 PM: no longer so sure. Read and excerpted a fair bit of research I had been putting off since the morning. After putting away all the laundry at 10, still feeling active, I check. It was Adderall. I can’t claim this one either way: by 9 or 10 I had begun to wonder whether it was really Adderall, but I didn’t feel confident saying it was; my feeling could be fairly described as 50%. (Break; this day/night was for trying armodafinil, pill #1.) Took pill around 6 PM; I had a very long drive to and from an airport ahead of me, ideal for Adderall. In case it was Adderall, I chewed up the pill - by making it absorb faster, more of the effect would be there when I needed it, during driving, and not lingering in my system past midnight. Was it? I didn’t notice any change in my pulse, I yawned several times on the way back, and my conversation was not more voluminous than usual. I did stay up later than usual, but that’s fully explained by walking to get ice cream. All in all, my best guess was that the pill was placebo, and I feel fairly confident, but not hugely so, that it was placebo. I’d give it ~70%. And checking the next morning… I was right! Finally. Took pill 12:11 PM. I am not certain.
While I do get some things accomplished (a fair amount of work on the Silk Road article and its submission to places), I also have some difficulty reading through a fiction book (Sum) and I seem kind of twitchy and constantly shifting windows. I am weakly inclined to think this is Adderall (say, 60%). It’s not my normal feeling. Next morning - it was Adderall. (Week-long break - armodafinil #2 experiment, volunteer work.) Took pill #6 at 12:35 PM. Hard to be sure. I ultimately decided that it was Adderall because I didn’t have as much trouble as I normally would in focusing on reading and then finishing my novel (Surface Detail) despite my family watching a movie, though I didn’t notice any lack of appetite. Call this one 60-70% Adderall. I check the next evening, and it was Adderall. Took pill at 10:50 AM. At 12:30 I watch the new Captain America, and come out as energetic as I went in, and was not hungry for snacks at all during it; at this point, I’m pretty confident (70%) that it was Adderall. At 5 I check, and it was. Overall, a pretty normal day, save for leading up to the third armodafinil trial. Just 3 Adderall left; took a random pill at 12:30. Hopefully I can get a lot of formatting done on hafu. I do manage to do a lot of work on it and my appetite seems minor up until 8 PM, although aside from those two observations I have little to go on; perhaps 60% that it was Adderall. I check the next morning, and it was not. Skipping the break day since it was placebo yesterday and I’d like to wind up the Adderall trials. Pill at 12:24 PM. I get very hungry around 3 PM, and it’s an unproductive day even considering how much stress and aggravation the 3 hours of a failed Debian unstable upgrade cost me. I feel quite sure (75%) it was placebo. It was. Took pill at 11:27 AM. Moderately productive. Not entirely sure. 50% either way. (It’s placebo.) Pill at 12:40 PM.
I spend entirely too much time arguing matters related to a LW post and on IRC, but I manage to channel it into writing a new mini-essay on my past intellectual sins. This sort of thing seems like Adderall behavior, and I don’t get hungry until much later. All in all, I feel easily 75% sure it’s Adderall; and it was. 12:18 PM. (Just 2 Adderall left now.) I manage to spend almost the entire afternoon single-mindedly concentrating on transcribing two parts of a 1996 Toshio Okada interview (it was very long, and the formatting more challenging than expected), which is strong evidence for Adderall, although I did feel fairly hungry while doing it. I don’t go to bed until midnight and sleep very poorly - despite taking triple my usual melatonin! Inasmuch as I’m already fairly sure that Adderall damages my sleep, this makes me even more confident (>80%). When I grumpily crawl out of bed and check: it’s Adderall. (One Adderall left.) 10:50 AM. Normal appetite; I try to read through Edward Luttwak’s The Grand Strategy of the Byzantine Empire; slow going. Overall, I guess it was placebo with 70% - I notice nothing I associate with Adderall. I check it at midnight, and it was placebo. 11:30 AM. By 2:30 PM, my hunger is quite strong and I don’t feel especially focused - it’s difficult to get through the tab-explosion of the morning, although one particularly stupid poster on the DNB ML makes me feel irritated like I might on Adderall. I initially figure the probability at perhaps 60% for Adderall, but when I wake up at 2 AM and am completely unable to get back to sleep, eventually racking up a Zeo score of 73 (compared to the usual 100s), there’s no doubt in my mind (95%) that the pill was Adderall. And it was indeed the last Adderall pill. My predictions were substantially better than random chance, so my default belief - that Adderall does affect me and (mostly) for the better - is borne out.
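As a rough check on that claim, one can score the diary’s guesses with a Brier score (which penalizes overconfident wrong answers; always answering 50% scores 0.25). The probabilities below are my approximate transcription of the confidence levels in the entries above: each number is the probability assigned to the outcome that turned out to be true, coding a “60-70%” as 0.65 and a wrong guess at X% as 1−X.

```python
# Probability assigned to the true pill identity, one entry per trial day,
# transcribed approximately from the diary above (e.g. the first entry was
# "Adderall with 40% confidence" and it WAS Adderall, hence 0.4).
probs = [0.4, 0.9, 0.5, 0.7, 0.6, 0.65, 0.7, 0.4, 0.75, 0.5, 0.75, 0.8, 0.7, 0.95]

brier = sum((1 - p) ** 2 for p in probs) / len(probs)
print(f"mean probability on the truth: {sum(probs) / len(probs):.2f}")
print(f"Brier score: {brier:.3f} (guessing 50% every day scores 0.250)")
```

Both numbers come out on the right side of chance, which is what “substantially better than random” amounts to here; with only 14 days, though, the margin is modest.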
I usually sleep very well, and 3 separate incidents of horrible sleep in a few weeks seems rather unlikely (though I didn’t keep track of dates carefully enough to link the Zeo data with the Adderall data). Between the price and the sleep disturbances, I don’t think Adderall is personally worthwhile.

Value of Information (VoI)

See also the discussion as applied to ordering modafinil & evaluating sleep experiments. The amphetamine mix branded “Adderall” is terribly expensive to obtain even compared to modafinil, due to its tight regulation (Schedule II, stricter than modafinil’s Schedule IV), popularity in college as a study drug, and reportedly moves by its manufacturer to exploit its privileged position as a licensed amphetamine maker to extract more consumer surplus. I paid roughly $4 a pill but could have paid up to $10. Good stimulant hygiene involves recovery periods to avoid one’s body adapting to eliminate the stimulating effects, so even if Adderall were the answer to all my woes, I would not be using it more than 2 or 3 times a week. Assuming 50 uses a year (for specific projects, let’s say, and not ordinary aimless usage), that’s a cool $200 a year. My general belief was that Adderall would be too much of a stimulant for me, as I am amphetamine-naive and Adderall has a bad reputation for letting one waste time on unimportant things. We could say my prediction was 50% that Adderall would be useful and worth investigating further. The experiment was pretty simple: blind randomized pills, 10 placebo & 10 active. I took notes on how productive I was and the next day guessed whether it was placebo or Adderall before breaking the seal and finding out. I didn’t do any formal statistics for it, much less a power calculation, so let’s try to be conservative by penalizing the information quality heavily and assuming it is only 25%. So ((200 − 0) / ln(1.05)) × 0.50 × 0.25 ≈ $512! The experiment probably used up no more than an hour or two total.
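Spelled out, that back-of-the-envelope calculation is: the net present value of the $200/year expense treated as a perpetuity at a 5% annual discount rate, times the 50% prior that the experiment changes my decision, times the 25% information-quality penalty. A sketch of the arithmetic:

```python
from math import log

annual_cost = 200        # $/year of ongoing Adderall use at stake
discount = log(1.05)     # continuous 5% annual discount rate
p_decision = 0.50        # prior that Adderall turns out worth continuing
quality = 0.25           # heavy penalty for the informal, underpowered design

npv = annual_cost / discount     # perpetuity value of $200/year, ~ $4,099
voi = npv * p_decision * quality
print(round(voi))                # -> 512
```

At $512 of expected value for an hour or two of effort, the experiment easily pays for itself even under this pessimistic quality discount.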
Vaniver argues that since I start off not intending to continue Adderall, the analysis actually needs to be different: “In 3, you’re considering adding a new supplement, not stopping a supplement you already use. The ‘I don’t try Adderall’ case has value $0; the ‘Adderall fails’ case is worth -$40 (assuming you only bought 10 pills, and this number should be increased by your analysis time and a weighted cost for potential permanent side effects); and the ‘Adderall succeeds’ case is worth $X − $40 − $4,099, where $X is the discounted lifetime value of the increased productivity due to Adderall, minus any discounted long-term side-effect costs. If you estimate Adderall will work with p = 0.5, then you should try out Adderall if you estimate that 0.5 × (X − 4,179) > 0, ie. X > $4,179. (Adderall working or not isn’t binary, so you might be more comfortable breaking down the various ‘how effective Adderall is’ cases when eliciting X, by coming up with different levels it could work at, their values, and then using a weighted sum to get X. This can also give you a better target for your experiment: ‘this needs to show a benefit of at least Y from Adderall for it to be worth the cost, and I’ve designed it so it has a reasonable chance of showing that.’) One thing to notice is that the default case matters a lot. This asymmetry is because you switch decisions in different possible worlds - when you would take Adderall but stop, you’re in the world where Adderall doesn’t work, and when you wouldn’t take Adderall but do, you’re in the world where Adderall does work (in the perfect-information case, at least). One of the ways you can visualize this is that you don’t penalize tests for giving you true negative information, and you reward them for giving you true positive information. (This might be worth a post by itself, and is very Litany of Gendlin.)” Either way, this example demonstrates that anything you are doing expensively is worth testing extensively.
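Vaniver’s threshold can be reproduced under the same discounting assumptions (a sketch; the $40 pill cost and the $4,099 discounted usage cost are his figures, and the algebra simplifies to X > cost/p + NPV of ongoing use):

```python
from math import log

cost_pills = 40                  # ~$4 x 10 pills, sunk whether or not it works
npv_usage = 200 / log(1.05)      # ~ $4,099: discounted cost of continuing use
p = 0.5                          # prior that Adderall works

# EV(try) = p*(X - cost_pills - npv_usage) + (1 - p)*(-cost_pills) > 0
# which rearranges to X > cost_pills/p + npv_usage.
threshold = cost_pills / p + npv_usage
print(round(threshold))          # -> 4179
```

So under a 50% prior, trying Adderall only makes sense if its discounted lifetime productivity value plausibly exceeds about $4,179; the asymmetry Vaniver describes is visible in how the $40 pill cost gets divided by p while the usage cost does not.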

Adrafinil

The adrafinil/Olmifon (bought simultaneously with the hydergine from Anti-Aging Systems, now Antiaging Central) was a disappointment: almost as expensive as actual modafinil, with the risk of liver problems, but it did nothing whatsoever that I noticed. It is supposed to be subtler than modafinil, but no effect at all is a little ridiculous. The advantage of adrafinil is that it is legal & over-the-counter in the USA, so one removes the small legal risk of ordering & possessing modafinil without a prescription, and the retailers may be more reliable because they are not operating in a niche of dubious legality. Based on comments from others, the liver problem may have been overblown, and modafinil vendors post-2012 seem to have become more unstable, so I may give adrafinil (from another source than Antiaging Central) a shot when my modafinil/armodafinil runs out.

Aniracetam

Very expensive; I noticed minimal improvements when combined with sulbutiamine & piracetam+choline. Definitely not worthwhile for me.

Bacopa monnieri

Bacopa is a supplement herb often used for memory or stress adaptation. Its chronic effects reportedly take many weeks to manifest, with no important acute effects. Out of curiosity, I bought 2 bottles of Bacognize Bacopa pills and ran a non-randomized, non-blinded ABABA quasi-self-experiment from June 2014 to September 2015, measuring effects on my memory performance, sleep, and daily self-ratings of mood/productivity. Because of the very slow onset, small effective sample size, definite temporal trends probably unrelated to Bacopa, and noise in the variables, the results were, as expected, ambiguous, and do not strongly support any correlation between Bacopa and memory/sleep/self-rating (+/−/− respectively). Main article: Bacopa.

Beta-phenylethylamine (PEA)

Based on this H+ article/advertisement, I gave a PEA supplement a try. Noticed nothing. Critical commentators pointed out that PEA is notoriously degraded by the digestive system and has essentially no effect on its own, though Neurvana’s ‘pro’ supplement claimed to avoid that. I guess it doesn’t. Discussions of PEA mention that it’s almost useless without an MAOI to pave the way; hence, when I decided to get deprenyl and noticed that deprenyl is an MAOI, I decided to also give PEA a second chance in conjunction with deprenyl. Unfortunately, in part due to my own shenanigans, Nubrain canceled the deprenyl order, and so I have 20g of PEA sitting around. Well, it’ll keep until such time as I do get an MAOI.

Choline/DMAE

Does little alone, but absolutely necessary in conjunction with piracetam. (Bought from Smart Powders.) When turning my 3kg of piracetam into pills, I decided to avoid the fishy-smelling choline and go with 500g of DMAE (Examine.com); it seemed to work well when I used it before with oxiracetam & piracetam, since I had no ‘piracetam headaches’, and it is considerably less bulky. In the future, I might try Alpha-GPC instead of the regular cholines; that supposedly has better bioavailability.

Coconut oil

Coconut oil was recommended by Pontus Granström on the Dual N-Back mailing list for boosting energy & mental clarity. It is fairly cheap (~$13 for 30 ounces) and tastes surprisingly good; it has a very bad reputation in some parts, but seems to be in the middle of a rehabilitation. Seth Roberts’s Buttermind experiment found no mental benefits to coconut oil (and benefits to eating butter), but I wonder. The first night I was eating some coconut oil, I did my n-backing past 11 PM; normally that damages my scores, but instead I got 66/66/75/88/77% (▁▁▂▇▃) on D4B and did not feel mentally exhausted by the end. The next day, I performed well on the Cambridge mental rotations test. An anecdote, of course, and it may be due to the vitamin D I simultaneously started. On another day, I was slumped under apathy after a promising start to the day; a dose of fish & coconut oil, and 1 last vitamin D, and I was back to feeling chipper and optimistic. Unfortunately I haven’t been testing out coconut oil & vitamin D separately, so who knows which is to thank. But still interesting. After several weeks of regularly consuming coconut oil and using up the first jar of 15oz, I’m no longer particularly convinced it was doing anything. (I’ve found it’s good for frying eggs, though.) Several days after using up the second jar, I notice no real difference in mood or energy or DNB scores.

Coluracetam

One of the most obscure -racetams around, coluracetam (Smarter Nootropics, Ceretropic, Isochroma) acts in a different way from piracetam - piracetam apparently acts on the breakdown of acetylcholine, while coluracetam instead increases how much choline can be turned into useful acetylcholine. This apparently is a unique mechanism. A crazy Longecity user, ScienceGuy, ponied up $16,000 (!) for a custom synthesis of 500g; he was experimenting with 10-80mg sublingual doses (the range in the original anti-depressive trials) and reported a laundry list of effects (as does Isochroma): primarily that it was anxiolytic and increased work stamina. Unfortunately for my stack, he claims it combines poorly with piracetam. He offered free 2g samples for regulars to test his claims; I asked & received some. Experiment design is complicated by his lack of use of any kind of objective tests, but 3 metrics seem worthwhile:

- dual n-back: testing his claims about concentration, increased energy & stamina, and increased alertness & lucidity.
- daily Mnemosyne flashcard scores: testing his claim about short & medium-term memory, viz. “I have personally found that with respect to the NOOTROPIC effect(s) of all the RACETAMS, whilst I have experienced improvements in concentration and working capacity / productivity, I have never experienced a noticeable ongoing improvement in memory. COLURACETAM is the only RACETAM that I have taken wherein I noticed an improvement in MEMORY, both with regards to SHORT-TERM and MEDIUM-TERM MEMORY. To put matters into perspective, the memory improvement has been mild, yet still significant; whereas I have experienced no such improvement at all with the other RACETAMS.”
- daily mood/productivity log (1-5): for the anxiolytic and working-capacity claims.

(In all 3, higher = better, so a multivariate result is easily interpreted.) He recommends a 10mg dose, but sublingually.
He mentions “COLURACETAM’s taste is more akin to that of PRAMIRACETAM than OXIRACETAM, in that it tastes absolutely vile” (not a surprise), so it is impossible to double-blind a sublingual administration - even if I knew of an inactive equally-vile-tasting substitute, I’m not sure I would subject myself to it. To compensate for ingesting the coluracetam, it would make sense to double the dose to 20mg (turning the 2g into <100 doses). Whether the effects persist over multiple days is not clear; I’ll assume they do not until someone says they do, since this makes things much easier.

Creatine Creatine (Examine.com) monohydrate was another early essay of mine - cheap (because it’s so popular with the bodybuilder types), and with a very good safety record. I bought some from Bulk Powders and combined it with my then-current regimen (piracetam+choline). I’m not a bodybuilder, but my interest was sparked by several studies, some showing benefits and others not - usually in subpopulations like vegetarians or old people. As I am not any of the latter, I didn’t really expect a mental benefit. As it happens, I observed nothing. What surprised me was something I had forgotten about: its physical benefits. My performance in Taekwondo classes suddenly improved - specifically, my endurance increased substantially. Before, classes had left me nearly prostrate at the end, but after, I was weary yet fairly alert and happy. (I have done Taekwondo since I was 7, and I have a pretty good sense of what is and is not normal performance for my body. This was not anything as simple as failing to notice increasing fitness or something.) This was driven home to me one day when in a flurry before class, I prepared my customary tea with piracetam, choline & creatine; by the middle of the class, I was feeling faint & tired, had to take a break, and suddenly, thunderstruck, realized that I had absentmindedly forgotten to actually drink it! This made me a believer. After I ran out of creatine, I noticed the increased difficulty, and resolved to buy it again at some point; many months later, there was a Smart Powders sale, so I bought it in my batch order, $12 for 1000g. As before, it made Taekwondo classes a bit easier. I paid closer attention this second time around and noticed that as one would expect, it only helped with muscular fatigue and did nothing for my aerobic issues. (I hate aerobic exercise, so it’s always been a weak point.) I eventually capped it as part of a sulbutiamine-DMAE-creatine-theanine mix. This ran out 2013-05-01.
In March 2014, I spent $19 for 1kg of micronized creatine monohydrate to resume creatine use and also to use it as a placebo in a honey-sleep experiment testing Seth Roberts’s claim that a few grams of honey before bedtime would improve sleep quality: my usual flour placebo being unusable because the mechanism might be through simple sugars, which flour would digest into. (I did not do the experiment: it was going to be a fair amount of messy work capping the honey and creatine, and I didn’t believe Roberts’s claims for a second - my only reason to do it would be to prove the claim wrong but he’d just ignore me and no one else cares.) I didn’t try measuring out exact doses but just put a spoonful in my tea each morning (creatine is tasteless). The 1kg lasted from 25 March to 18 September or 178 days, so ~5.6g & $0.11 per day. Ryan Carey tracked creatine consumption vs some tests with ambiguous results.
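The per-day figures above are simple division; a quick sketch of the arithmetic (only the 1kg, 178-day, and $19 figures come from the text):

```python
grams_per_day = 1000 / 178  # 1kg of creatine over 178 days
cost_per_day = 19 / 178     # $19 per kg

print(round(grams_per_day, 1), round(cost_per_day, 2))  # 5.6 0.11
```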

Cytisine Cytisine is an obscure drug known, if at all, for use in anti-smoking treatment. Cytisine is not known as a stimulant and I’m not addicted to nicotine, so why give it a try? Nicotine is one of the more effective stimulants available, and it’s odd how few nicotine analogues or nicotinic agonists there are available; nicotine has a few flaws like its short half-life and increasing blood pressure, so I would be interested in a replacement. The nicotine metabolite cotinine, in the human studies available, looks intriguing and potentially better, but I have been unable to find a source for it. One of the few relevant drugs which I can obtain is cytisine, from Ceretropic, at 2x1.5mg doses. There are not many anecdotal reports on cytisine, but at least a few suggest somewhat comparable effects with nicotine, so I gave it a try. My first dose on 2017-03-01, at the recommended 0.5ml/1.5mg, was miserable: I felt like I had the flu and had to nap for several hours, taking 6h in all to return to normal. After waiting a month, I tried again, but after a week of daily dosing in May, I noticed no benefits; I tried increasing to 3x1.5mg but this immediately caused another afternoon crash/nap on 18 May. So I scrapped my cytisine. Oh well.

Huperzine-A The chemical Huperzine-A (Examine.com) is extracted from a moss. It is an acetylcholinesterase inhibitor (instead of forcing out more acetylcholine like the -racetams, it prevents acetylcholine from breaking down). My experience report: One for the ‘null hypothesis’ files - Huperzine-A did nothing for me. Unlike piracetam or fish oil, after a full bottle (Source Naturals, 120 pills at 200μg each), I noticed no side-effects, no mental improvements of any kind, and no changes in DNB scores from straight Huperzine-A. Possible confounding factors:

- youth: I am considerably younger than the other poster who uses HA

- I only tested a few days with choline+H-A (but I didn’t notice anything beyond the choline there).

- counterfeiting? Source Naturals is supposed to be trustworthy, but rare herbal products are most susceptible to fake goods.

It’s really too bad. H-A is cheap, compact, doesn’t taste at all, and in general is much easier to take than fish oil (and much easier to swallow than piracetam or choline!). But if it doesn’t deliver, it doesn’t deliver.

Hydergine Hydergine (FDA adverse events) was another disappointment (like the adrafinil, purchased from Anti-Aging Systems/Antiaging Central). I noticed little to nothing that couldn’t be normal daily variation.

Iodine As discussed in my iodine essay (FDA adverse events), iodine is a powerful health intervention as it eliminates cretinism and improves average IQ by a shocking magnitude. If this effect were possible for non-fetuses in general, it would be the best nootropic ever discovered, and so I looked at it very closely. Unfortunately, after going through ~20 experiments looking for ones which intervened with iodine post-birth and took measures of cognitive function, my meta-analysis concludes that the effect is small and driven mostly by one outlier study. Once you are born, it’s too late. But the results could be wrong, and iodine might be cheap enough to take anyway, or take for non-IQ reasons. (This possibility was further weakened for me by an August 2013 blood test of TSH which put me at 3.71 uIU/ml, comfortably within the reference range of 0.27-4.20.)

Power analysis

Starting from the studies in my meta-analysis, we can try to estimate an upper bound on how big any effect would be, if it actually existed.
One of the most promising null results, Southon et al 1994, turns out to be not very informative: if we punch in the number of kids, we find that they needed a large effect size (d=0.81) before they could see anything:

library(pwr)
pwr.t.test(power=0.75, sig.level=0.05, n=22)
#      Two-sample t test power calculation
#
#               n = 22
#               d = 0.8130347

Fitzgerald 2012 is better, and gives a number of useful details on her adult experiment: Participants (n=205) [young adults aged 18-30 years] were recruited between July 2010 and January 2011, and were randomized to receive either a daily 150 µg (0.15mg) iodine supplement or daily placebo supplement for 32 weeks…After adjusting for baseline cognitive test score, examiner, age, sex, income, and ethnicity, iodine supplementation did not significantly predict 32 week cognitive test scores for Block Design (p=0.385), Digit Span Backward (p=0.474), Matrix Reasoning (p=0.885), Symbol Search (p=0.844), Visual Puzzles (p=0.675), Coding (p=0.858), and Letter-Number Sequencing (p=0.408).

Full text isn’t available, although some of the p-values suggest that there might be differences which didn’t reach significance, so to estimate an upper bound on what sort of effect-size we’re dealing with:

pwr.t.test(type="two.sample", power=0.75, alternative="greater", n=102)
#      Two-sample t test power calculation
#
#               n = 102
#               d = 0.325867

This is a much tighter upper bound than Southon et al 1994 gave us, and also kind of discouraging: remember, the smaller the effect size, the more data you will need to see it, and data is always expensive. If I were to try to do any experiment, how many pairs would I need if we optimistically assume that d=0.32?
pwr.t.test(type="paired", d=0.325867, power=0.75, alternative="greater")
#      Paired t test power calculation
#
#               n = 52.03677

We’d want 53 pairs, but Fitzgerald 2012’s experimental design called for 32 weeks of supplementation for a single pair of before-after tests - so that’d be ~1,664 days or ~54 months or ~4.5 years! We can try to adjust it downwards with shorter blocks allowing more frequent testing; but problematically, iodine is stored in the thyroid and can apparently linger elsewhere - many of the cited studies used intramuscular injections of iodized oil (as opposed to iodized salt or kelp supplements) because this ensured an adequate supply for months or years with no further compliance by the subjects. If the effects are that long-lasting, it may be worthless to try shorter blocks than ~32 weeks.

We’ve looked at estimating based on individual studies. But we aggregated them into a meta-analysis more powerful than any of them, and it gave us a final estimate of d=~0.1. What does that imply?

pwr.t.test(type="paired", d=0.1, power=0.75, alternative="greater")
#      Paired t test power calculation
#
#               n = 539.2906

540 pairs of tests or 1080 blocks… This game is not worth the candle!

VoI

For background on “value of information” calculations, see the Adderall calculation. Cost: This would be a very time-consuming experiment. Any attempt to combine this with other experiments by ANOVA would probably push the end-date out by months, and one would start to be seriously concerned that changes caused by aging or environmental factors would contaminate the results. A 5-year experiment with 7-month intervals will probably eat up 5+ hours to prepare <12,000 pills (active & placebo); each switch and test of mental functioning will probably eat up another hour for 32 hours. (And what test maintains validity with no practice effects over 5 years? Dual n-back would be unusable because of improvements to WM over that period.)
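The pwr calls above can be cross-checked without R using the standard normal-approximation power formulas (a sketch; the approximation slightly understates the noncentral-t answers that pwr reports):

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # standard-normal quantile function

# Southon et al 1994: two-sample, two-sided alpha=0.05, power=0.75, n=22/group
d_southon = (z(0.975) + z(0.75)) * (2 / 22) ** 0.5     # ~0.79 (pwr: 0.81)
# Fitzgerald 2012: two-sample, one-sided, power=0.75, n=102/group
d_fitzgerald = (z(0.95) + z(0.75)) * (2 / 102) ** 0.5  # ~0.32 (pwr: 0.326)

# paired one-sided design: pairs needed to reach power=0.75 at effect size d
def pairs_needed(d):
    return ((z(0.95) + z(0.75)) / d) ** 2

print(round(pairs_needed(0.325867)), round(pairs_needed(0.1)))  # 51 538 (pwr: 52 & 539)
```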
Add in an hour for analysis & writeup, and that suggests >38 hours of work; at minimum wage, 38 × $7.25 = $275.50. 12,000 pills is roughly $12.80 per thousand or $154; 120 potassium iodide pills is ~$9, so 5 years’ worth is (365.25/120) × $9 × 5 = $137. The time plus the gel capsules plus the potassium iodide is $567. Benefit: Some work has been done on estimating the value of IQ, both as net benefits to the possessor (including all zero-sum or negative-sum aspects) and as net positive externalities to the rest of society. The estimates are substantial: in the thousands of dollars per IQ point. But since increasing IQ post-childhood is almost impossible barring disease or similar deficits, and even increasing childhood IQs is very challenging, many of these estimates are merely correlations or regressions, and the experimental childhood estimates must be weakened considerably for any adult - since so much time and so many opportunities have been lost. A wild guess: $1000 net present value per IQ point. The range for severely deficient children was 10-15 points, so any normal (somewhat deficient) adult gain must be much smaller and consistent with Fitzgerald 2012’s ceiling on possible effect sizes (small). Let’s make another wild guess at 2 IQ points, for $2000. Expectation: What is my prior expectation that iodine will do anything? A good way to break this question down is the following series of necessary steps: how much do I believe I am iodine deficient? (If I am not deficient, then supplementation ought to have no effect.) The previous material on modern trends suggests a prior >25%, and higher than that if I were female. However, I was raised on a low-salt diet because my father has high blood pressure, and while I like seafood, I doubt I eat it more often than weekly. I suspect I am somewhat iodine-deficient, although not as confidently as I once believed I had a vitamin D deficiency. Let’s call this one 75%.

If deficient, how likely would it help at my age? (The effect may exist only at limited age ranges - like height, once you’re done growing, few interventions short of bone surgery will make one taller or shorter.) So this is one of the key assumptions: can we extend the benefits in deficient children to somewhat deficient adults? Fitzgerald 2012 and the general absence of successful experiments suggests not, as does the general historic failure of scores of IQ-related interventions in healthy young adults. Of the 10 studies listed in the original section dealing with iodine in children or adults, only 2 show any benefit; in lieu of a meta-analysis, a rule of thumb would be 20%, but both those studies used a package of dozens of nutrients - and not just iodine - so if the responsible substance were randomly picked, that suggests we ought to give it a chance of roughly 20%/20 (ie. ~1%) of being iodine! I may be unduly optimistic if I give this as much as 10%.

If it would help at my age, how likely do I think my supplementation would hit the sweet spot and not under or overshoot? (We already saw that too much iodine could poison both adults and children, and of course too little does not help much - iodine would seem to follow a U-curve like most supplements.) The listed doses at iherb.com often are ridiculously large: 10-50mg! These are doses that seem to actually be dangerous for long-term consumption, and I believe these are doses that are designed to completely suffocate the thyroid gland and prevent it from absorbing any more iodine - which is useful as a short-term radioactive fallout prophylactic, but quite useless from a supplementation standpoint. Fortunately, there are available doses at Fitzgerald 2012’s exact dose, which is roughly the daily RDA: 0.15mg. Even the contrarian materials seem to focus on a modest doubling or tripling of the existing RDA, so the range seems relatively narrow. I’m fairly confident I won’t overshoot if I go with 0.15-1mg, so let’s call this 90%. Conclusion: 75% times 10% times 90% is ~6.75%.

EV of taking iodine

Now, what is the expected value (EV) of simply taking iodine, without the additional work of the experiment? 4 cans of 0.15mg x 200 is $20 for 2.1 years’ worth or ~$10 a year, for a NPV cost of $205 ($10/ln 1.05) versus a 20% chance of $2000, or $400. So the expected value is greater than the NPV cost of taking it, so I should start taking iodine.

Value of Information

Finally, what is the value of information of conducting the experiment? With an estimated power of 75%, the ~6.75% prior from above that iodine helps, and a potential benefit of $2000, that’s 0.75 × 0.0675 × $2000 ≈ $101. We must weigh $101 against the estimated experimentation cost of $567. Since the information is worth less than the experiment costs, I should not do it.
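The cost, EV, and VoI arithmetic can be tallied in a few lines; a sketch using the figures stated above (any small differences from the text are rounding):

```python
from math import log

# experiment cost: >38 hours at minimum wage, plus capsules and potassium iodide
time_cost = 38 * 7.25               # $275.50
capsule_cost = 12 * 12.80           # ~12,000 gel capsules at $12.80/thousand
ki_cost = 365.25 / 120 * 9 * 5      # $9 per 120 KI pills, daily, for 5 years
experiment_cost = time_cost + capsule_cost + ki_cost  # ~$566

# EV of simply taking iodine: a ~$10/year perpetuity at 5% vs a 20% shot at $2000
npv_cost = 10 / log(1.05)           # ~$205
ev_taking = 0.20 * 2000             # $400

# VoI of the experiment: power x P(iodine helps) x benefit
p_helps = 0.75 * 0.10 * 0.90        # deficient x helps-at-my-age x right-dose = 0.0675
voi = 0.75 * p_helps * 2000         # ~$101

print(ev_taking > npv_cost, voi < experiment_cost)  # True True: take it, don't test it
```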
But notice that most of the cost imbalance is coming from the estimate of the benefit of IQ - if it quadrupled to a defensible $8000, that would be close to the experiment cost! So in a way, what this VoI calculation tells us is that what is most valuable right now is not that iodine might possibly increase IQ, but getting a better grip on how much any IQ intervention is worth. So the overall picture is that I should:

- start taking a moderate dose of iodine at some point
- look into cheap tests for iodine deficiency

One self-test suggested online involves dripping iodine onto one’s skin and seeing how long it takes to be absorbed. This doesn’t seem terrible, but according to Derry and Abraham, it is unreliable.

Home urine test kits of unknown accuracy are available online (Google “iodine urine test kit”) but run $70-$100+, eg. Hakala Research.

- try to think of cheaper experiments I could run to test for benefits from iodine

Iodine eye color changes?

A poster or two on Longecity claimed that iodine supplementation had changed their eye color, suggesting a connection to the yellow-reddish element bromine - bromides being displaced by their chemical cousin, iodine. I was skeptical this was a real effect since I don’t know why visible amounts of either iodine or bromine would be in the eye, and the photographs produced were less than convincing. But it’s an easy thing to test, so why not? For 2 weeks, upon awakening I took close-up photographs of my right eye. Then I ordered two jars of Life-Extension Sea-Iodine (60x1mg) (1mg being an apparently safe dose), and when it arrived on 2012-09-10, I stopped the photography and began taking 1 iodine pill every other day. I noticed no ill effects (or benefits) after a few weeks and upped the dose to 1 pill daily. After the first jar of 60 pills was used up, I switched to the second jar, and began photography as before for 2 weeks. The photographs were uploaded, cropped by hand in Gimp, and shrunk to more reasonable dimensions; both sets are available in a Zip file. Upon examining the photographs, I noticed no difference in eye color, but it seems that my move had changed the ambient lighting in the morning and so there was a clear difference between the two sets of photographs! The ‘before’ photographs had brighter lighting than the ‘after’ photographs. Regardless, I decided to run a small survey on QuickSurveys/Toluna to confirm my diagnosis of no-change; the survey was 11 forced-choice pairs of photographs (before-after), with the instructions as follows: Estimated time: <1 min. Below are 11 pairs of close-up eye photographs.
In half the photos, the eye color of the iris may or may not have been artificially lightened; as a challenge, the photos are taken under varying light conditions! In each pair, try to pick the photo with a lightened iris eye color if any. (Do not judge simply on overall lighting.) (I reasoned that this description is not actually deceptive: taking pills is indeed “artificial”, as I would not ‘naturally’ consume so much iodine or seaweed extract, and I didn’t know for sure that my eyes hadn’t changed color so the correct description is indeed “may or may not have”.) I posted a link to the survey on my Google+ account, and inserted the link at the top of all gwern.net pages; 51 people completed all 11 binary choices (most of them coming from North America & Europe), which seems adequate since the 11 questions are all asking the same question, and 561 responses to one question is quite a few. A few different statistical tests seem applicable: a chi-squared test whether there’s a difference between all the answers, a two-sample test on the averages, and most meaningfully, summing up the responses as a single pair of numbers and doing a binomial test:

before <- c(27, 31, 18, 26, 22, 29, 20, 13, 18, 31, 27) # per question, how many picked the 'before' photo
after  <- c(24, 20, 33, 25, 29, 22, 31, 38, 33, 20, 24) # how many picked the 'after' photo
summary(before); summary(after)
#    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
#    13.0    19.0    26.0    23.8    28.0    31.0
#    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
#    20.0    23.0    25.0    27.2    32.0    38.0

chisq.test(before, after, simulate.p.value=TRUE)
#      Pearson's Chi-squared test with simulated p-value
#
# data:  before and after
# X-squared = 77, df = NA, p-value = 0.000135

wilcox.test(before, after)
#      Wilcoxon rank sum test with continuity correction
#
# data:  before and after
# W = 43, p-value = 0.2624
# alternative hypothesis: true location shift is not equal to 0

binom.test(c(sum(before), sum(after)))
#      Exact binomial test
#
# data:  c(sum(before), sum(after))
# number of successes = 262, number of trials = 561, p-value = 0.1285
# alternative hypothesis: true probability of success is not equal to 0.5
# 95% confidence interval:
#  0.4251 0.5093
# sample estimates:
# probability of success
#                  0.467

So the chi-squared believes there is a statistically-significant difference, the two-sample test disagrees, and the binomial also disagrees. Since I regarded it as a dubious theory, can’t see a difference, and the binomial seems like the most appropriate test, I conclude that several months of 1mg iodine did not change my eye color. (As a final test, when I posted the results on the Longecity forum where people were claiming the eye color change, I swapped the labels on the photos to see if anyone would claim something along the lines of “when I look at the photos, I can see a difference!” I thought someone might do that, which would be a damning demonstration of their biases & wishful thinking, but no one did.)
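R’s binom.test is easy to replicate from scratch as a check; a pure-Python sketch of the exact two-sided binomial test (for the symmetric p=0.5 null, the two-sided p-value is just double the smaller tail):

```python
from math import comb

n, k = 561, 262  # total forced choices; how many picked the 'before' photo
# P(X <= 262) under Binomial(561, 0.5), computed with exact integer arithmetic
tail = sum(comb(n, i) for i in range(k + 1))
p_value = 2 * tail / 2**n
print(round(p_value, 2))  # ~0.13, matching binom.test's p-value of 0.1285
```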

Kratom Kratom (Erowid, Reddit) is a tree leaf from Southeast Asia; it’s addictive to some degree (like caffeine and nicotine), and so it is regulated/banned in Thailand, Malaysia, Myanmar, and Bhutan among others - but not the USA. (One might think that kratom’s common use there indicates how very addictive it must be, except it literally grows on trees so it can’t be too hard to get.) Kratom is not particularly well-studied (and what has been studied is not necessarily relevant - I’m not addicted to any opiates!), and it suffers the usual herbal problem of being an endlessly variable food product and not a specific chemical with the fun risks of perhaps being poisonous, but in my reading it doesn’t seem to be particularly dangerous or have serious side-effects. A LessWronger found that it worked well for him as far as motivation and getting things done went, as did another LessWronger who sells it online (terming it “a reasonable productivity enhancer”) as did one of his customers, a pickup artist oddly enough. The former was curious whether it would work for me too and sent me Speciosa Pro’s “Starter Pack: Test Drive” (a sampler of 14 packets of powder and a cute little wooden spoon). In SE Asia, kratom’s apparently chewed, but the powders are brewed as a tea. I started with the 10g of ‘Vitality Enhanced Blend’, a sort of tan dust. Used 2 little-spoonfuls (dust tastes a fair bit like green/oolong tea dust) into the tea mug and then some boiling water. A minute of steeping and… bleh. Tastes sort of musty and sour. (I see why people recommended sweetening it with honey.) The effects? While I might’ve been more motivated - I hadn’t had caffeine that day and was a tad under the weather, a feeling which seemed to go away perhaps half an hour after starting - I can’t say I experienced any nausea or very noticeable effects. (At least the flavor is no longer quite so offensive.) 
3 days later, I’m fairly miserable (slept poorly, had a hair-raising incident, and a big project was not received as well as I had hoped), so well before dinner (and after a nap) I brew up 2 wooden-spoons of ‘Malaysia Green’ (olive-color dust). I drank it down; tasted slightly better than the first. I was feeling better after the nap, and the kratom didn’t seem to change that. The next day was somewhat similar, so at 2:40 I tried out 3 spoonfuls of ‘sm00th’ (?), a straight tan powder. Like the Malaysia Green, not so bad tasting. By the second cup, my stomach is growling a little. No particular motivation. A week later: ‘Golden Sumatran’, 3 spoonfuls, a more yellowish powder. (I combined it with some tea dregs to hopefully cut the flavor a bit.) Had a paper to review that night. No (subjectively noticeable) effect on energy or productivity. I tried 4 spoonfuls at noon the next day; nothing except a little mental tension, for lack of a better word. I think that was just the harbinger of what my runny nose that day and the day before was, a head cold that laid me low during the evening. 4 spoons of ‘Thai Red Vein’ at 1:30 PM; cold hasn’t gone away but the acetaminophen was making it bearable. 4 spoons of ‘Enriched Thai’ (brown) at 8PM. Steeped 15 minutes, drank; no effect - I have to take a break to watch 3 Mobile Suit Gundam episodes before I even feel like working. 5 spoons of ‘Enriched Sumatran’ (tannish-brown) at 3:10 PM; especially sludgy this time, the Sumatran powder must be finer than the other. 4 spoons ‘Synergy’ (a “Premium Whole Leaf Blend”) at 11:20 AM; by 12:30 PM I feel quite tired and like I need to take a nap (previous night’s sleep was slightly above average, 96 ZQ). 5 spoons ‘Essential Indo’ (olive green) at 1:50 PM; no apparent effect except perhaps some energy for writing (but then a vague headache). At dose #9, I’ve decided to give up on kratom. It is possible that it is helping me in some way that careful testing (eg. 
dual n-back over weeks) would reveal, but I don’t have a strong belief that kratom would help me (I seem to benefit more from stimulants, and I’m not clear on how an opiate-bearer like kratom could stimulate me). So I have no reason to do careful testing. Oh well.

Lion’s Mane mushroom Hericium erinaceus (Examine.com) was recommended strongly by several on the ImmInst.org forums for its long-term benefits to learning, apparently linked to Nerve growth factor. Highly speculative stuff, and it’s unclear whether the mushroom powder I bought was the right form to take (ImmInst.org discussions seem to universally assume one is taking an alcohol or hotwater extract). It tasted nice, though, and I mixed it into my sleeping pills (which contain melatonin & tryptophan). I’ll probably never know whether the $30 for 0.5lb was well-spent or not. (I was more than a little nonplussed when the mushroom seller included a little pamphlet educating one about how papaya leaves can cure cancer, and how I’m shortening my life by decades by not eating many raw fruits & vegetables. There were some studies cited, but usually for points disconnected from any actual curing or longevity-inducing results.)

Lithium Lithium is a well-known mood stabilizer & suicide preventative; some research suggests lithium may be a cognitively-protective nutrient, and on population levels chronic lithium consumption (through drinking water) predicts lower levels of mental illness, violence, & suicide. Main article: Lithium. Lithium orotate is sold commercially in low doses; I purchased 200 pills with 5mg of lithium each. (To put this dosage in perspective, therapeutic psychiatric doses of lithium are around 500mg of lithium carbonate - roughly 100mg elemental, or 20x larger.) The pills are small and tasteless, and not at all hard to take. Lithium experiment I experiment with a blind random trial of 5mg lithium orotate looking for effects on mood and various measures of productivity. There is no detectable effect, good or bad. Some suggested that the lithium would turn me into a ‘zombie’, recalling the complaints of psychiatric patients. But at 5mg elemental lithium x 200 pills, I’d have to eat 20 to get up to a single clinical dose (a psychiatric dose might be 500mg of lithium carbonate, which translates to ~100mg elemental), so I’m not worried about overdosing. To test this, I took on day 1 & 2 no less than 4 pills/20mg as an attack dose; I didn’t notice any large change in emotional affect or energy levels. And it may’ve helped my motivation (though I am also trying out the tyrosine). The effect? 3 or 4 weeks later, I’m not sure. When I began putting all of my nootropic powders into pill-form, I put half a lithium pill in each, and nevertheless ran out of lithium fairly quickly (3kg of piracetam makes for >4000 OO-size pills); those capsules were buried at the bottom of the bucket under lithium-less pills. So I suddenly went cold-turkey on lithium. Reflecting on the past 2 weeks, I seem to have been less optimistic and productive, with items now lingering on my To-Do list which I didn’t expect to remain. An effect? Possibly. A real experiment is called for.
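The carbonate-to-elemental conversion can be checked from atomic weights; a quick sketch (standard atomic weights; only the 500mg and 5mg doses come from the text):

```python
li, c, o = 6.94, 12.011, 15.999                     # atomic weights of Li, C, O
elemental_fraction = 2 * li / (2 * li + c + 3 * o)  # Li2CO3 is ~19% lithium by weight
clinical_elemental = 500 * elemental_fraction       # 500mg carbonate -> ~94mg elemental
pills_per_dose = clinical_elemental / 5             # vs the 5mg orotate pills
print(round(clinical_elemental), round(pills_per_dose))  # 94 19
```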
Design Most of the reported benefits of lithium are impossible for me to test: rates of suicide and Parkinson’s are right out, but so is crime and neurogenesis (the former is too rare & unusual, the latter too subtle & hard to measure), and likewise potential negatives. So we could measure:

- mood, via daily self-report; should increase

  The principal metric would be ‘mood’, however defined. Zeo’s web interface & data export includes a field for ‘Day Feel’, which is a rating 1-5 of general mood & quality of day. I can record a similar metric at the end of each day. 1-5 might be a little crude even with a year of data, so a more sophisticated measure might be in order. The first mood study is paywalled so I’m not sure what they used, but Shiotsuki 2008 used the State-Trait Anxiety Inventory (STAI) and the Profile of Mood States Test (POMS). The full POMS sounds too long to use daily, but the Brief POMS might work. In the original 1987 paper “A brief POMS measure of distress for cancer patients”, patients answering this questionnaire had a mean total score of 10.43 (standard deviation 8.87). Is this the best way to measure mood? I’ve asked Seth Roberts; he suggested using a 0-100 scale, but personally, there’s no way I can assess my mood on 0-100. My mood is sufficiently stable (to me) that 0-5 is asking a bit much, even. I ultimately decided to just go with the simple 0-5 scale, although it seems to have turned out to be more of a 2-4 scale! Apparently I’m not very good at introspection.

- long-term memory (Mnemosyne 2.0’s statistics); could increase (neurogenesis), do nothing (null result), or decrease (metal poisoning)
- working memory (dual n-back scores via Brain Workshop); like long-term memory
- sleep (Zeo); should increase (via mood improvement)
- time procrastinating on computer (the arbtt daemon records open & active windows every 10-40 seconds; these statistics can be parsed into categories like work or play, and total time on the latter categories could be a useful metric.
A second metric would be number of commits to the gwern.net source repository.) Lithium is somewhat persistent in the body, and its effects are not acute, especially in low doses; this calls for long blocked trials. The blood half-life is 12-36 hours; hence two or three days ought to be enough to build up and wash out. A week-long block is reasonable since that gives 5 days for effects to manifest, although month-long blocks would not be a bad choice either. (I prefer blocks which fit in round periods because it makes self-experiments easier to run if the blocks fit in normal time-cycles like day/week/month. The most useless self-experiment is the one abandoned halfway.) With subtle effects, we need a lot of data, so we want at least half a year (6 blocks) or better yet, a year (12 blocks); this requires 180 actives and 180 placebos. This is easily covered by $11 for “Doctor’s Best Best Lithium Orotate (5mg), 200-Count” (more precisely, “Lithium 5mg (from 125mg of lithium orotate)”) and $14 for 1000x1g empty capsules (purchased February 2012). For convenience I settled on 168 lithium & 168 placebos (7 pill-machine batches each, 14 batches total); I can use them in 24 paired blocks of 7 days/1 week each (48 total blocks/48 weeks). The lithium expiration date is October 2014, so that is not a problem. The methodology would be essentially the same as the vitamin-D-in-the-morning experiment: put a multiple of 7 placebos in one container and the same number of actives in another identical container, hide & randomly pick one of them, use that container for 7 days, then the other for 7 days; look inside them for the label to determine which period was active and which was placebo, refill them, and start again.

VoI

For background on “value of information” calculations, see the Adderall calculation. Low-dose lithium orotate is extremely cheap, ~$10 a year.
There is some research literature on it improving mood and impulse control in regular people, but some of it is epidemiological (which implies considerable unreliability); my current belief is that there is probably some effect size, but at just 5mg, it may be too tiny to matter. I have ~40% belief that there will be a large effect size, but I’m doing a long experiment and I should be able to detect a large effect size with >75% chance. So, the formula is NPV of the difference between taking and not taking, times quality of information, times expectation: 10−0ln1.05⋅0.75⋅0.40=61.4, which justifies a time investment of less than 9 hours. As it happens, it took less than an hour to make the pills & placebos, and taking them is a matter of seconds per week, so the analysis will be the time-consuming part. This one may actually turn a profit. Data first pair first block started and pill taken: 2012-05-11 - 19 May: 1 20 May - 27: 0 second pair first block started and pill taken: 29 May - 4 June: 1 second block: 5 June - 11 June: 0 third pair first block: 12 June - 18 June: 1 second block: 19 June - 25 June: 0 fourth pair first block: 26 June - 2 July: 1 second block: 3 July - 8 July: 0 fifth pair first block: 13 July - 20 July: 1 second block: 21 July - 27 July: 0 sixth pair first block: 28 July - 3 August: 0 second block: 4 August - 10 August: 1 seventh pair first block: 11 August - 17 August: 1 second block: 18 August - 24 August: 0 eighth pair first block: 25 August - 31 August: 1 second block: 1 September - 4 September, stopped until 24 September, finished 25 September: 0 I interrupted the lithium self-experiment until March 2013 in order to run the LSD microdosing self-experiment without a potential confound; ninth block pair: 2013-03-12 - 18 March: 1 19 March - 25 March: 0 tenth pair: 26 March - 1 April: 0 2 April - 8 April: 1 eleventh pair: 9 April - 15 April: 0 16 April - 21 April: 1 twelfth pair: 22 April - 28 April: 1 29 April - 5 May: 0 thirteenth pair: 6 May 
- 12 May: 0; 13 May - 19 May: 1
- fourteenth pair: 20 May - 26 May: 1; 27 May - 2 June: 0
- fifteenth: 5 June - 11 June: 0; 12 June - 18 June: 1
- sixteenth: 19 June - 25 June: 0; 26 June - 2 July: 1
- seventeenth: 3 July - 9 July: 0; 10 July - 16 July: 1
- eighteenth: 17 July - 23 July: 0; 24 July - 28 July, 8 August - 9 August: 1
- nineteenth: 10 August - 16 August: 0; 17 August - 23 August: 1
- twentieth: 24 August - 30 August: 0; 3 September - 6 September: 1
- twenty-first: 7 September - 13 September: 1; 14 September - 20 September: 0
- twenty-second: 21 September - 27 September: 0; 28 September - 4 October: 1
- twenty-third: 5 October - 11 October: 0; 12 October - 18 October: 1
- twenty-fourth: 20 - 26 October: 0; 27 October - 2 November: 1

Analysis

Preprocessing

- lithium: hand-generated
- MP: hand-edited into mp.csv
- Mnemosyne daily recall scores: extracted from the database:

    sqlite3 -batch ~/.local/share/mnemosyne/default.db \
     "SELECT timestamp,easiness,grade FROM log WHERE event_type==9;" | \
     tr "|" "," \
     > gwern-mnemosyne.csv

- DNB scores: omitted because I wound up getting tired of DNB around November 2012 and so have no scores for most of the experiment
- Zeo sleep: loaded from existing export; I don't expect any changes, so I will test just the ZQ
- arbtt: supports the necessary scripting:

    arbtt-stats --logfile=/home/gwern/doc/arbtt/2012-2013.log \
     --output-format="csv" --for-each="day" --min-percentage=0 > 2012-2013-arbtt.csv
    arbtt-stats --logfile=/home/gwern/doc/arbtt/2013-2014.log \
     --output-format="csv" --for-each="day" --min-percentage=0 > 2013-2014-arbtt.csv

arbtt generates cumulative time-usage for roughly a dozen overlapping tags/categories of activity of varying value. For the specific analysis, I plan to run factor analysis to extract one or two factors which seem to correlate with useful activity/work, and regress on those, instead of trying to regress on a dozen different time variables.
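Backing up for a moment: the value-of-information formula quoted at the start of this experiment (NPV of the difference, times quality of information, times expectation) is just a few lines of arithmetic. Here is a sketch of that calculation, using the numbers from the text (annual benefit of 10, 5% discount rate, 75% detection chance, 40% prior); this script is only an illustration of the arithmetic, not part of the original analysis:

```python
import math

def value_of_information(annual_benefit, discount_rate, p_detect, p_effect):
    """Net present value of a perpetual annual benefit discounted at
    `discount_rate`, scaled by the probability the experiment detects the
    effect and the prior probability the effect exists at all."""
    npv = annual_benefit / math.log(1 + discount_rate)
    return npv * p_detect * p_effect

voi = value_of_information(10, 0.05, 0.75, 0.40)
print(round(voi, 1))  # ~61.5 (the text rounds to 61.4)
```

At an hourly rate of $7 or so, a value of ~$61 justifies roughly 9 hours of experiment-related work, matching the estimate in the text.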
- number of commits to the gwern.net source repository:

    cd ~/wiki/
    echo "Gwern.net.patches,Date" > ~/patchlog.txt
    git log --after=2012-05-11 --before=2013-11-02 --format="%ad" --date=short master | \
     sort | uniq --count | tr --squeeze-repeats ' ' ',' | cut -d ',' -f 2,3 >> ~/patchlog.txt

Prep work (read in, extract relevant date range, combine into a single dataset, run factor analysis to extract some potentially useful variables):

lithium <- read.csv("lithium.csv")
lithium$Date <- as.Date(lithium$Date)
lithium$X <- NULL # rm() cannot delete a data-frame column; assign NULL instead
mp <- read.csv("mp.csv")
mp$Date <- as.Date(mp$Date)
mnemosyne <- read.csv("gwern-mnemosyne.csv", header=FALSE,
    col.names=c("Timestamp", "Easiness", "Grade"),
    colClasses=c("integer", "numeric", "integer"))
mnemosyne$Date <- as.Date(as.POSIXct(mnemosyne$Timestamp, origin="1970-01-01", tz="EST"))
mnemosyne <- mnemosyne[mnemosyne$Date > as.Date("2012-05-11") & mnemosyne$Date < as.Date("2013-11-02"),]
mnemosyne <- aggregate(mnemosyne$Grade, by=list(mnemosyne$Date), FUN=mean)
colnames(mnemosyne) <- c("Date", "Mnemosyne.grade")
zeo <- read.csv("https://www.gwern.net/docs/zeo/gwern-zeodata.csv")
zeo$Sleep.Date <- as.Date(zeo$Sleep.Date, format="%m/%d/%Y")
colnames(zeo)[1] <- "Date"
zeo <- zeo[zeo$Date > as.Date("2012-05-11") & zeo$Date < as.Date("2013-11-02"),]
zeo <- zeo[, c(1:10, 23)]
zeo$Start.of.Night <- sapply(strsplit(as.character(zeo$Start.of.Night), " "), function(x) { x[[2]] })
## convert "06:45" to minutes-past-midnight, 405
interval <- function (x) { if (!
is.na(x)) { if (grepl(" s", x)) as.integer(sub(" s", "", x))
    else { y <- unlist(strsplit(x, ":")); as.integer(y[[1]])*60 + as.integer(y[[2]]) }
    } else NA }
zeo$Start.of.Night <- sapply(zeo$Start.of.Night, interval)
## the night 'wraps around' at ~800, so let's take 0-400 and add +800 to reconstruct 'late at night'
zeo[zeo$Start.of.Night < 400,]$Start.of.Night <- zeo[zeo$Start.of.Night < 400,]$Start.of.Night + 800
arbtt1 <- read.csv("2012-2013-arbtt.csv")
arbtt2 <- read.csv("2013-2014-arbtt.csv")
arbtt <- rbind(arbtt1, arbtt2)
arbtt <- arbtt[as.Date(arbtt$Day) >= as.Date("2012-05-11") & as.Date(arbtt$Day) <= as.Date("2013-11-02"),]
## rename Day -> Date, delete Percentage
arbtt <- with(arbtt, data.frame(Date=Day, Tag=Tag, Time=Time))
## Convert time-lengths to second-counts: "0:16:40" to 1000 (seconds); "7:57:30" to 28650 (seconds), etc.
## We prefer units of seconds since arbtt has sub-minute resolution and not all categories
## will have a lot of time each day.
interval <- function (x) { if (!
is.na(x)) { if (grepl(" s", x)) as.integer(sub(" s", "", x))
    else { y <- unlist(strsplit(x, ":"));
           as.integer(y[[1]])*3600 + as.integer(y[[2]])*60 + as.integer(y[[3]]) }
    } else NA }
arbtt$Time <- sapply(as.character(arbtt$Time), interval)
library(reshape)
arbtt <- reshape(arbtt, v.names="Time", timevar="Tag", idvar="Date", direction="wide")
arbtt[is.na(arbtt)] <- 0
arbtt$Date <- as.Date(arbtt$Date)
patches <- read.csv("patchlog.txt")
patches$Date <- as.Date(patches$Date)
## merge all the previous data into a single data-frame:
lithiumExperiment <- merge(merge(merge(merge(merge(lithium, mp), mnemosyne, all=TRUE),
    patches, all=TRUE), arbtt, all=TRUE), zeo, all=TRUE)
## no patches recorded for a day == 0 patches that day
lithiumExperiment[is.na(lithiumExperiment$Gwern.net.patches),]$Gwern.net.patches <- 0
## NA = I didn't do SRS that day; but that is bad and should be penalized!
lithiumExperiment[is.na(lithiumExperiment$Mnemosyne.grade),]$Mnemosyne.grade <- 0
productivity <- lithiumExperiment[, c(3, 5:22)]
library(psych) ## for factor analysis
nfactors(productivity)
# VSS complexity 1 achieves a maximum of 0.58 with 14 factors
# VSS complexity 2 achieves a maximum of 0.67 with 14 factors
# The Velicer MAP achieves a minimum of 0.02 with 1 factors
# Empirical BIC achieves a minimum of -304.3 with 4 factors
# Sample Size adjusted BIC achieves a minimum of -97.84 with 7 factors
#
# Statistics by number of factors
#   vss1 vss2 map dof chisq prob sqresid fit RMSEA BIC SABIC complex eChisq eRMS
# 1 0.16 0.00 0.016 152 1.3e+03 2.6e-190 20.4 0.16 0.122 389.4 871.9 1.0 2.1e+03 1.1e-01
# 2 0.27 0.31 0.022 134 7.8e+02 1.9e-91 16.7 0.31 0.095 -65.2 360.1 1.3 1.1e+03 7.9e-02
# 3 0.30 0.40 0.021 117 4.9e+02 5.2e-47 14.3 0.41 0.078 -247.2 124.2 1.6 7.0e+02 6.2e-02
# 4 0.39 0.47 0.024 101 2.5e+02 4.1e-14 12.1 0.50 0.052 -389.8 -69.2 1.7 3.4e+02 4.3e-02
# 5 0.39 0.51 0.028 86 1.9e+02 2.5e-10
11.2 0.54 0.049 -347.4 -74.4 1.7 2.4e+02 3.6e-02
# 6 0.41 0.53 0.034 72 1.4e+02 7.9e-06 10.3 0.57 0.041 -317.3 -88.8 1.6 1.7e+02 3.1e-02
# 7 0.44 0.54 0.041 59 8.6e+01 1.2e-02 9.6 0.60 0.030 -285.1 -97.8 1.8 1.1e+02 2.5e-02
# 8 0.40 0.52 0.050 47 1.1e+02 1.4e-07 9.9 0.59 0.053 -181.2 -32.0 2.0 2.0e+02 3.3e-02
# 9 0.48 0.57 0.063 36 4.6e+01 1.1e-01 8.3 0.66 0.024 -180.2 -65.9 1.7 6.0e+01 1.8e-02
# 10 0.51 0.62 0.079 26 1.9e+01 8.3e-01 7.2 0.70 0.000 -144.6 -62.1 1.6 1.9e+01 1.0e-02
# 11 0.52 0.62 0.098 17 1.4e+01 6.8e-01 6.7 0.72 0.000 -93.2 -39.3 1.7 1.5e+01 9.0e-03
# 12 0.52 0.61 0.124 9 1.1e+01 3.1e-01 6.7 0.72 0.020 -46.1 -17.5 1.6 1.3e+01 8.3e-03
# 13 0.48 0.61 0.163 2 4.9e+00 8.6e-02 6.3 0.74 0.053 -7.7 -1.3 1.8 6.2e+00 5.8e-03
# 14 0.58 0.67 0.210 -4 7.5e-03 NA 4.9 0.80 NA NA NA 1.8 9.0e-03 2.2e-04
# 15 0.56 0.64 0.293 -9 4.6e-06 NA 5.3 0.78 NA NA NA 2.0 6.1e-06 5.7e-06
# 16 0.53 0.62 0.465 -13 8.7e-07 NA 5.5 0.77 NA NA NA 2.1 8.6e-07 2.2e-06
# 17 0.51 0.61 0.540 -16 9.3e-12 NA 5.6 0.77 NA NA NA 2.1 1.1e-11 7.8e-09
# 18 0.51 0.61 1.000 -18 7.0e-10 NA 5.6 0.77 NA NA NA 2.1 7.8e-10 6.5e-08
# 19 0.51 0.61 NA -19 0.0e+00 NA 5.6 0.77 NA NA NA 2.1 6.2e-25 1.8e-15
#   eCRMS eBIC
# 1 0.112 1107.9
# 2 0.089 303.3
# 3 0.075 -31.6
# 4 0.055 -300.5
# 5 0.050 -304.3
# 6 0.047 -280.3
# 7 0.042 -257.4
# 8 0.062 -97.2
# 9 0.039 -167.1
# 10 0.026 -144.7
# 11 0.028 -92.1
# 12 0.036 -44.0
# 13 0.054 -6.4
# 14 NA NA
# 15 NA NA
# 16 NA NA
# 17 NA NA
# 18 NA NA
# 19 NA NA

factorization <- fa(productivity, nfactors=4); factorization
# Standardized loadings (pattern matrix) based upon correlation matrix
#                     MR3   MR1   MR2   MR4     h2    u2 com
# MP                 0.05  0.01 -0.02  0.34 0.1241 0.876 1.1
# Gwern.net.patches -0.04  0.01  0.01  0.48 0.2241 0.776 1.0
# Time.WWW           0.98 -0.04 -0.10  0.02 0.9778 0.022 1.0
# Time.X             0.49  0.29  0.47 -0.03 0.5801 0.420 2.6
# Time.IRC           0.35 -0.06 -0.14  0.16 0.1918 0.808 1.8
# Time.Writing       0.04 -0.01  0.04  0.69 0.4752 0.525 1.0
# Time.Stats         0.42 -0.10  0.30  0.01 0.2504 0.750 1.9
# Time.PDF          -0.09 -0.05  0.98  0.00 0.9791 0.021 1.0
# Time.Music         0.10 -0.10  0.02  0.03 0.0196 0.980 2.2
# Time.Rec           0.03  0.99 -0.03 -0.02 0.9950 0.005 1.0
# Time.SRS           0.06 -0.06  0.07  0.10 0.0209 0.979 3.4
# Time.Sysadmin      0.22  0.13 -0.04  0.13 0.0953 0.905 2.4
# Time.DNB          -0.04 -0.05 -0.06  0.07 0.0149 0.985 3.3
# Time.Bitcoin       0.15 -0.07 -0.07 -0.04 0.0306 0.969 2.1
# Time.Blackmarkets  0.18 -0.09 -0.08  0.02 0.0470 0.953 1.9
# Time.Programming  -0.04  0.05 -0.04  0.43 0.1850 0.815 1.1
# Time.Backups      -0.09  0.06 -0.01  0.04 0.0114 0.989 2.4
# Time.Umineko      -0.16  0.71 -0.03  0.06 0.5000 0.500 1.1
# Time.Typing       -0.03 -0.04  0.02 -0.01 0.0034 0.997 2.4
#
#                       MR3  MR1  MR2  MR4
# SS loadings          1.67 1.64 1.33 1.08
# Proportion Var       0.09 0.09 0.07 0.06
# Cumulative Var       0.09 0.17 0.24 0.30
# Proportion Explained 0.29 0.29 0.23 0.19
# Cumulative Proportion 0.29 0.58 0.81 1.00
#
# With factor correlations of
#       MR3   MR1   MR2   MR4
# MR3  1.00  0.12 -0.05  0.10
# MR1  0.12  1.00  0.07 -0.08
# MR2 -0.05  0.07  1.00 -0.08
# MR4  0.10 -0.08 -0.08  1.00
#
# Mean item complexity = 1.8
# Test of the hypothesis that 4 factors are sufficient.
#
# The degrees of freedom for the null model are 171
# and the objective function was 3.08 with Chi Square of 1645
# The degrees of freedom for the model are 101 and the objective function was 0.46
#
# The root mean square of the residuals (RMSR) is 0.04
# The df corrected root mean square of the residuals is 0.06
#
# The harmonic number of observations is 538 with the empirical chi square 332.7 with prob < 1.6e-26
# The total number of observations was 542 with MLE Chi Square = 246 with prob < 4.1e-14
#
# Tucker Lewis Index of factoring reliability = 0.832
# RMSEA index = 0.052 and the 90% confidence intervals are 0.043 0.06
# BIC = -389.8
# Fit based upon off diagonal values = 0.88
# Measures of factor score adequacy
#                                               MR3  MR1  MR2  MR4
# Correlation of scores with factors           0.99 1.00 0.99 0.79
# Multiple R square of scores with factors     0.98 0.99 0.98 0.63
# Minimum correlation of possible factor scores 0.95 0.99 0.96 0.25

## I interpret MR3=Internet+Stats usage; MR1=goofing off; MR2=reading/stats; MR4=writing
## I don't care about MR1, so we'll look for effects on 3/2/4:
lithiumExperiment$MR3 <- predict(factorization, data=productivity)[,1]
lithiumExperiment$MR2 <- predict(factorization, data=productivity)[,3]
lithiumExperiment$MR4 <- predict(factorization, data=productivity)[,4]
write.csv(lithiumExperiment, file="2012-lithium-experiment.csv", row.names=FALSE)

Test

lithiumExperiment <- read.csv("https://www.gwern.net/docs/lithium/2012-lithium-experiment.csv")
l1 <- lm(cbind(MP, Mnemosyne.grade, Gwern.net.patches, ZQ, MR3, MR2, MR4) ~ Lithium, data=lithiumExperiment)
summary(l1)
# Response MP :
#
# Coefficients:
#             Estimate Std.
Error t value Pr(>|t|)
# (Intercept) 3.0613 0.0591 51.8 <2e-16
# Lithium -0.0425 0.0841 -0.5 0.61
#
# Residual standard error: 0.755 on 320 degrees of freedom
# (220 observations deleted due to missingness)
# Multiple R-squared: 0.000796, Adjusted R-squared: -0.00233
# F-statistic: 0.255 on 1 and 320 DF, p-value: 0.614
#
#
# Response Mnemosyne.grade :
#
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)
# (Intercept) 3.158 0.120 26.41 <2e-16
# Lithium -0.141 0.170 -0.83 0.41
#
# Residual standard error: 1.53 on 320 degrees of freedom
# (220 observations deleted due to missingness)
# Multiple R-squared: 0.00214, Adjusted R-squared: -0.000975
# F-statistic: 0.687 on 1 and 320 DF, p-value: 0.408
#
#
# Response Gwern.net.patches :
#
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)
# (Intercept) 3.8712 0.3271 11.83 <2e-16
# Lithium 0.0345 0.4655 0.07 0.94
#
# Residual standard error: 4.18 on 320 degrees of freedom
# (220 observations deleted due to missingness)
# Multiple R-squared: 1.72e-05, Adjusted R-squared: -0.00311
# F-statistic: 0.00549 on 1 and 320 DF, p-value: 0.941
#
#
# Response ZQ :
#
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)
# (Intercept) 91.773 1.024 89.66 <2e-16
# Lithium 0.523 1.457 0.36 0.72
#
# Residual standard error: 13.1 on 320 degrees of freedom
# (220 observations deleted due to missingness)
# Multiple R-squared: 0.000402, Adjusted R-squared: -0.00272
# F-statistic: 0.129 on 1 and 320 DF, p-value: 0.72
#
#
# Response MR3 :
#
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)
# (Intercept) -0.0258 0.0691 -0.37 0.71
# Lithium 0.0657 0.0983 0.67 0.50
#
# Residual standard error: 0.882 on 320 degrees of freedom
# (220 observations deleted due to missingness)
# Multiple R-squared: 0.00139, Adjusted R-squared: -0.00173
# F-statistic: 0.447 on 1 and 320 DF, p-value: 0.504
#
#
# Response MR2 :
#
# Coefficients:
#             Estimate Std.
Error t value Pr(>|t|)
# (Intercept) 0.0187 0.0788 0.24 0.81
# Lithium 0.0435 0.1121 0.39 0.70
#
# Residual standard error: 1.01 on 320 degrees of freedom
# (220 observations deleted due to missingness)
# Multiple R-squared: 0.00047, Adjusted R-squared: -0.00265
# F-statistic: 0.15 on 1 and 320 DF, p-value: 0.698
#
#
# Response MR4 :
#
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)
# (Intercept) 0.000209 0.052772 0.00 1.00
# Lithium -0.073464 0.075099 -0.98 0.33
#
# Residual standard error: 0.674 on 320 degrees of freedom
# (220 observations deleted due to missingness)
# Multiple R-squared: 0.00298, Adjusted R-squared: -0.000134
# F-statistic: 0.957 on 1 and 320 DF, p-value: 0.329

summary(manova(l1))
#            Df      Pillai  approx F num Df den Df  Pr(>F)
# Lithium     1 0.009477169 0.4291862      7    314 0.88373
# Residuals 320

No variable reaches statistical-significance, the coefficient signs are inconsistent, and the MANOVA indicates no overall improvement from including the lithium variable.

Conclusion

There were no observable effects, either positive or negative, from the lithium orotate doses. This is consistent with my subjective experience. So I will not be using lithium orotate anymore.
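The lithium experiment's paired-block design (each pair of weekly blocks randomizes which block gets the active pill and which the placebo, as in the data list above) reduces to a coin flip per pair. A hypothetical sketch of such an assignment, not the procedure actually used to randomize the pills:

```python
import random

def assign_pairs(n_pairs, seed=None):
    """For each pair of blocks, randomly pick which block (first or second)
    gets the active pill (1); the other block gets the placebo (0)."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        first = rng.randint(0, 1)         # coin flip for the first block
        pairs.append((first, 1 - first))  # second block gets the opposite
    return pairs

schedule = assign_pairs(24, seed=2012)
# every pair contains exactly one active and one placebo block:
assert all(a + b == 1 for a, b in schedule)
```

The blocking guarantees an exactly balanced 50/50 split of active and placebo weeks, which simple independent per-week randomization would not.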

LLLT

An unusual intervention is infrared/near-infrared light of particular wavelengths (LLLT), theorized to assist mitochondrial respiration and yield a variety of therapeutic benefits. Some have suggested it may have cognitive benefits. LLLT sounds strange, but it's simple, easy, cheap, and just plausible enough that it might work. I tried out LLLT treatment on a sporadic basis in 2013-2014; statistically, usage correlated strongly & statistically-significantly with increases in my daily self-ratings, and not with any sleep disturbances. Excited by that result, I ran a randomized self-experiment in 2014-2015 with the same procedure, only to find that the causal effect was weak or non-existent. I have stopped using LLLT as likely not worth the inconvenience.

Low level laser therapy (LLLT) is a curious treatment based on the application of a few minutes of weak light in specific near-infrared wavelengths (the name is a bit of a misnomer, as LEDs seem to be employed more these days, the laser aspect being unnecessary and LEDs much cheaper). Unlike most kinds of light therapy, it doesn't seem to have anything to do with circadian rhythms or zeitgebers. Proponents claim efficacy in treating physical injuries, back pain, and numerous other ailments, recently extending it to case studies of mental issues like brain fog. (It's applied to injured parts; for the brain, it's typically applied to points on the skull like F3 or F4.) And LLLT is, naturally, completely safe, without any side effects or risk of injury.

To say that this all sounds dubious would be an understatement. (My first reaction was that LLLT and lostfalco's other proposals were probably the stupidest thing I'd seen all month.)
The research literature, while copious, is messy and varied: methodologies and devices vary substantially, sample sizes are tiny, the study designs vary from paper to paper, metrics are sometimes comically limited (one study measured speed of finishing a RAPM IQ test but not scores), blinding is rare and of unclear success, etc. Relevant papers include Chung et al 2012, Rojas & Gonzalez-Lima 2013, & Gonzalez-Lima & Barrett 2014. Another Longecity user ran a self-experiment, with some design advice from me, in which he performed a few cognitive tests over several periods of LLLT usage (the blocks turned out to be ABBA), using his father and towels to try to blind himself as to condition. I analyzed his data, and his scores did seem to improve, but they improved so much in the last part of the self-experiment that I found myself dubious as to what was going on - possibly a failure of randomization given too few blocks, plus a temporal exogenous factor in the last quarter which was responsible for the improvement. While the mechanism is largely unknown, one commonly-suggested possibility is that light of the relevant wavelengths is preferentially absorbed by the protein cytochrome c oxidase, a key protein in mitochondrial metabolism and production of ATP, substantially increasing output; this extra output presumably can be useful for cellular activities like healing or higher performance. I was contacted by the Longecity user lostfalco, and read through some of his writings on the topic. I had never heard of LLLT before, but the mitochondria mechanism didn't sound impossible (although I wondered whether it made sense at a quantitative level), and there was at least some research backing it; more importantly, lostfalco had discovered that devices for LLLT could be obtained for as little as $15. (Clearly no one will be getting rich off LLLT or affiliate revenue any time soon.)
Nor could I think of any way the LLLT could be easily harmful: there were no drugs involved, physical contact was unnecessary, power output was too low to directly damage through heating, and if it had no LLLT-style effect but some sort of circadian effect through hitting photoreceptors, using it in the morning wouldn't seem to interfere with sleep. Since LLLT was so cheap, seemed safe, was interesting, just trying it would involve minimal effort, and it would be a favor to lostfalco, I decided to try it. I purchased off eBay a $13 "48 LED illuminator light IR Infrared Night Vision+Power Supply For CCTV. Auto Power-On Sensor, only turn-on when the surrounding is dark. IR LED wavelength: 850nm. Powered by DC 12V 500mA adaptor." It arrived in 4 days, on 2013-09-07. It fits handily in my palm. My cellphone camera verified it worked and emitted infrared - important because there's no visible light at all (except in complete darkness, where I can make out a faint red light), no noise, and no apparent heat (it took about 30 minutes before the lens or body warmed up noticeably when I left it on a table). This was good, since I had worried that there would be heat or noise which would make blinding impossible; all I had to do was figure out how to randomly turn the power on, and I could run blinded self-experiments with it. My first time was relatively short: 10 minutes around the F3/F4 points, with another 5 minutes to the forehead. It's awkward holding it up against one's head (I see why people talk of "LED helmets"), and boring to wait. No initial impressions except maybe feeling a bit mentally cloudy, but that went away within 20 minutes of finishing, when I took a nap outside in the sunlight. Lostfalco says "Expectations: You will be tired after the first time for 2 to 24 hours. It's perfectly normal.", but I'm not sure - my dog woke me up very early and disturbed my sleep, so maybe that's why I felt suddenly tired.
On the second day, I escalated to 30 minutes on the forehead, and tried an hour on my finger joints. No particular observations except less tiredness than before and perhaps less joint ache. Third day: skipped forehead stimulation, exclusively knee & ankle. Fourth day: forehead at various spots for 30 minutes; tiredness. 5/6/7/8th days (11/12/13/14th): skipped. Ninth: forehead, 20 minutes. No noticeable effects.

Pilot

At this point I began to get bored with it and the lack of apparent effects, so I began a pilot trial: I'd use the LED set for 10 minutes every few days before 2PM, record it, and in a few months look for a correlation with my daily self-ratings of mood/productivity (for 2.5 years I've asked myself at the end of each day whether I did more, the usual amount, or less work that day than average, so 2=below-average, 3=average, 4=above-average; it's ad hoc, but in some factor analyses I've been playing with, it seems to load on a lot of other variables I've measured, so I think it's meaningful). On 2014-03-15, I disabled the light sensor: the complete absence of subjective effects since the first sessions made me wonder if the LED device was even turning on - a little bit of ambient light seems to disable it, thanks to the light sensor. So I stuffed the sensor full of putty, verified it was now always-on with the cellphone camera, and began again; this time it seemed to warm up much faster, making me wonder if all the previous sessions' sense of warmth was simply heat from my hand holding the LEDs. In late July 2014, I was cleaning up my rooms and was tired of LLLT, so I decided to chuck the LED device. But before I did that, I might as well analyze the data. That left me with 329 days of data. The results are that (correcting for the magnesium citrate self-experiment I was running during the time period, which did not turn out too great) days on which I happened to use my LED device for LLLT were much better than regular days.
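The pilot comparison boils down to a binary predictor (LLLT day or not) against the 2/3/4 MP self-rating, so the standardized effect is just the mean difference between the two groups of days divided by the standard deviation of the ratings (the d reported in the analysis is computed analogously, from the regression coefficient). A toy illustration of that arithmetic with made-up ratings, not the actual data:

```python
import statistics

def cohens_d(treated, control):
    """Mean difference scaled by the sample standard deviation of all observations."""
    sd = statistics.stdev(treated + control)
    return (statistics.mean(treated) - statistics.mean(control)) / sd

# hypothetical daily 2/3/4 productivity self-ratings:
lllt_days     = [3, 4, 3, 4, 4, 3, 4]
non_lllt_days = [3, 2, 3, 3, 4, 2, 3]
d = cohens_d(lllt_days, non_lllt_days)
```

With coarse ordinal ratings like these, even a modest mean shift produces a respectable-looking d, which is one reason to distrust a single subjective rating variable.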
Below is a graph showing the entire MP data-series with LOESS-smoothed lines splitting LLLT vs non-LLLT days:

[Figure: daily productivity self-rating (higher=better) over time, split by LLLT usage that day (2013–2014)]

LLLT pilot analysis

The correlation of LLLT usage with higher MP self-rating is fairly large (r=0.19 / d=0.455) and statistically-significant (p=0.0006). I have no particularly compelling story for why this might be a correlation and not causation. It could be placebo, but I wasn't expecting that. It could be a selection effect (days on which I bothered to use the annoying LED set are better days), but then I'd expect the off-days to be below-average, and compared to the 2 years of trendline before, there doesn't seem to be much of a fall. The R code:

lllt <- read.csv("https://www.gwern.net/docs/nootropics/2014-08-03-lllt-correlation.csv")
l <- lm(MP ~ LLLT + as.logical(Magnesium.citrate) + as.integer(Date) +
        as.logical(Magnesium.citrate):as.integer(Date), data=lllt); summary(l)
# ...Coefficients:
#                                                        Estimate Std.
Error t value Pr(>|t|)
# (Intercept)                                         4.037702597 0.616058589 6.55409 5.0282e-10
# LLLTTRUE                                            0.330923350 0.095939634 3.44929 0.00069087
# as.logical(Magnesium.citrate)TRUE                   0.963379487 0.842463568 1.14353 0.25424378
# as.integer(Date)                                   -0.001269089 0.000880949 -1.44059 0.15132856
# as.logical(Magnesium.citrate)TRUE:as.integer(Date) -0.001765953 0.001213804 -1.45489 0.14733212

0.330923350 / sd(lllt$MP, na.rm=TRUE)
# [1] 0.455278787
cor.test(lllt$MP, as.integer(lllt$LLLT))
#
#  Pearson's product-moment correlation
#
# data: lllt$MP and as.integer(lllt$LLLT)
# t = 3.4043, df = 327, p-value = 0.0007458
# alternative hypothesis: true correlation is not equal to 0
# 95% confidence interval:
#  0.0784517682 0.2873891665
# sample estimates:
#       cor
# 0.185010342

## check whether there's anything odd about non-LLLT days by expanding to include baseline
llltImputed <- lllt
llltImputed[is.na(llltImputed)] <- 0
llltImputed[llltImputed$MP == 0,]$MP <- 3 # clean up an outlier using the median
summary(lm(MP ~ LLLT + as.logical(Magnesium.citrate) + as.integer(Date) +
           as.logical(Magnesium.citrate):as.integer(Date), data=llltImputed))
# ...Coefficients:
#                                                        Estimate  Std. Error  t value Pr(>|t|)
# (Intercept)                                         2.959172295 0.049016571 60.37085 < 2.22e-16
# LLLT                                                0.336886970 0.083731179 4.02344 6.2212e-05
# as.logical(Magnesium.citrate)TRUE                   2.155586397 0.619675529 3.47857 0.00052845
# as.integer(Date)                                    0.000181441 0.000103582 1.75166 0.08017565
# as.logical(Magnesium.citrate)TRUE:as.integer(Date) -0.003373682 0.000904342 -3.73054 0.00020314

power.t.test(power=0.8, delta=(0.336886970 / sd(lllt$MP, na.rm=TRUE)), type="paired", alternative="one.sided")
#
#      Paired t test power calculation
#
#              n = 30.1804294
#          delta = 0.463483435
#             sd = 1
#      sig.level = 0.05
#          power = 0.8
#    alternative = one.sided
#
# NOTE: n is number of *pairs*, sd is std.dev.
of *differences* within pairs

library(ggplot2)
llltImputed$Date <- as.Date(llltImputed$Date)
ggplot(data=llltImputed, aes(x=Date, y=MP, col=as.logical(llltImputed$LLLT))) +
    geom_point(size=I(3)) + stat_smooth() +
    scale_colour_manual(values=c("gray49", "green"), name="LLLT")

So I have started a randomized experiment; it should take 2 months, given the size of the correlation. If that turns out to be successful too, I'll have to look into methods of blinding - for example, some sort of electronic doohickey which turns on randomly half the time and which records whether it's on somewhere one can't see. (Then for the experiment, one hooks up the LED, turns the doohickey 'on', and applies it directly to the forehead, checking the next morning to see whether it was really on or off.)

Sleep

One reader notes that for her, the first weeks of LLLT usage seemed to be accompanied by sleeping longer than usual. Did I experience anything similar? There doesn't appear to be any particular effect on total sleep or other sleep variables:

lllt <- read.csv("https://www.gwern.net/docs/nootropics/2014-08-03-lllt-correlation.csv")
zeo <- read.csv("https://www.gwern.net/docs/zeo/gwern-zeodata.csv")
lllt$Date <- as.Date(lllt$Date)
zeo$Date <- as.Date(zeo$Sleep.Date, format="%m/%d/%Y")
sleepLLLT <- merge(lllt, zeo, all=TRUE)
l <- lm(cbind(Start.of.Night, Time.to.Z, Time.in.Wake, Awakenings, Time.in.REM, Time.in.Light,
              Time.in.Deep, Total.Z, ZQ, Morning.Feel) ~ LLLT, data=sleepLLLT)
summary(manova(l))
##             Df     Pillai approx F num Df den Df  Pr(>F)
## LLLT         1 0.04853568 1.617066     10    317 0.10051
## Residuals  326

library(ggplot2)
qplot(sleepLLLT$Date, sleepLLLT$Total.Z, color=sleepLLLT$LLLT)

LLLT pilot factor analysis

Factor-analyzing several other personal datasets into 8 factors while omitting the previous MP variable, I find LLLT correlates with personal-productivity-related factors, but less convincingly than MP, suggesting the
previous result is not quite as good as it seems. My worry about the MP variable is that, plausible or not, it does seem relatively vulnerable to manipulation; other variables I could look at, like arbtt window-tracking of how I spend my computer time, number or size of edits to my files, or spaced repetition performance, would be harder to manipulate. If it's all due to MP, then if I remove the MP and LLLT variables, and summarize all the other variables with factor analysis into 2 or 3 variables, I should see no increases in them when I put LLLT back in and look for a correlation between the factors & LLLT with a multivariate regression.

Preparation of data:

lllt <- read.csv("~/wiki/docs/nootropics/2014-08-03-lllt-correlation.csv",
                 colClasses=c("Date", rep("integer", 4), "logical"))
lllt <- data.frame(Date=lllt$Date, LLLT=lllt$LLLT)
mp <- read.csv("~/selfexperiment/mp.csv", colClasses=c("Date", "integer"))
creativity <- read.csv("~/selfexperiment/dailytodo-marchjunecreativity.csv",
                       colClasses=c("Date", "integer"))
mnemosyne <- read.csv("~/selfexperiment/mnemosyne.csv", header=FALSE,
                      col.names=c("Timestamp", "Easiness", "Grade"),
                      colClasses=c("integer", "numeric", "integer"))
mnemosyne$Timestamp <- as.POSIXct(mnemosyne$Timestamp, origin="1970-01-01", tz="EST")
mnemosyne$Date <- as.Date(mnemosyne$Timestamp)
mnemosyne <- aggregate(Grade ~ Date, mnemosyne, mean)
mnemosyne$Average.Spaced.repetition.score <- mnemosyne$Grade
mnemosyne$Grade <- NULL # rm() cannot delete a data-frame column; assign NULL instead
dnb <- read.csv("~/doc/brainworkshop/data/stats.txt", header=FALSE)
dnb$V1 <- as.POSIXct(dnb$V1, format="%F %R:%S")
dnb <- dnb[!
is.na(dnb$V1),]
dnb <- with(dnb, data.frame(Timestamp=V1, Nback.type=V2, Percentage=V3))
dnb$Date <- as.Date(dnb$Timestamp)
dnbDaily <- aggregate(Percentage ~ Date + Nback.type, dnb, mean)
arbtt1 <- read.csv("~/selfexperiment/2012-2013-arbtt.txt")
arbtt2 <- read.csv("~/selfexperiment/2013-2014-arbtt.txt")
arbtt <- rbind(arbtt1, arbtt2)
arbtt$Percentage <- NULL
interval <- function (x) { if (!is.na(x)) {
        if (grepl(" s", x)) as.integer(sub(" s", "", x))
        else { y <- unlist(strsplit(x, ":"));
               as.integer(y[[1]])*3600 + as.integer(y[[2]])*60 + as.integer(y[[3]]) }
    } else NA }
arbtt$Time <- sapply(as.character(arbtt$Time), interval)
library(reshape)
arbtt <- reshape(arbtt, v.names="Time", timevar="Tag", idvar="Day", direction="wide")
arbtt$Date <- as.Date(arbtt$Day)
arbtt$Day <- NULL
arbtt[is.na(arbtt)] <- 0
patches <- read.csv("~/selfexperiment/patchlog-gwern.net.txt", colClasses=c("integer", "Date"))
patches$Gwern.net.patches.log <- log1p(patches$Gwern.net.patches)
# modified lines per day is much harder: a state machine sums lines until it hits the next date
patchCount <- scan(file="~/selfexperiment/patchlog-linecount-gwern.net.txt", character(), sep="\n")
patchLines <- new.env()
for (i in 1:length(patchCount)) {
    if (grepl("\t", patchCount[i])) {
        patchLines[[date]] <- patchLines[[date]] +
            sum(sapply(strsplit(patchCount[i], "\t"), as.integer))
    } else {
        date <- patchCount[i]
        patchLines[[date]] <- 0
    }
}
patchLines <- as.list(patchLines)
patchLines <- data.frame(Date=rep(names(patchLines), lapply(patchLines, length)),
                         Gwern.net.linecount=unlist(patchLines))
row.names(patchLines) <- NULL
patchLines$Date <- as.Date(patchLines$Date)
patchLines$Gwern.net.linecount.log <- log1p(patchLines$Gwern.net.linecount)
firstDay <- patches$Date[1]; lastDay <- patches$Date[nrow(patches)]
patches <- merge(merge(patchLines, patches, all=TRUE),
                 data.frame(Date=seq(firstDay, lastDay, by="day")), all=TRUE)
# if entries are missing, they == 0
patches[is.na(patches)] <- 0
# combine all the data:
llltData <- merge(merge(merge(merge(merge(lllt, mp, all=TRUE), creativity, all=TRUE),
                              dnbDaily, all=TRUE), arbtt, all=TRUE), patches, all=TRUE)
write.csv(llltData, file="2014-08-08-lllt-correlation-factoranalysis.csv", row.names=FALSE)

Factor analysis. The strategy: read in the data, drop unnecessary data, impute missing variables (the data is too heterogeneous, and collected starting at varying intervals, to be clean), estimate how many factors would fit best, factor-analyze, pick the ones which look like they best match my ideas of what 'productive' is, extract per-day estimates, and finally regress LLLT usage on the selected factors to look for increases.
lllt <- read.csv("https://www.gwern.net/docs/nootropics/2014-08-08-lllt-correlation-factoranalysis.csv")
## the log transforms are more useful:
lllt$Date <- NULL; lllt$Nback.type <- NULL
lllt$Gwern.net.linecount <- NULL; lllt$Gwern.net.patches <- NULL
## https://stats.stackexchange.com/questions/28576/filling-nas-in-a-dataset-with-column-medians-in-r
imputeColumnAsMedian <- function(x) {
    x[is.na(x)] <- median(x, na.rm=TRUE) # replace each NA with the column median
    x                                    # return the imputed column
}
llltI <- data.frame(apply(lllt, 2, imputeColumnAsMedian))
library(psych)
nfactors(llltI[-c(1,2)])
# VSS complexity 1 achieves a maximum of 0.56 with 16 factors
# VSS complexity 2 achieves a maximum of 0.66 with 16 factors
# The Velicer MAP achieves a minimum of 0.01 with 1 factors
# Empirical BIC achieves a minimum of -280.23 with 8 factors
# Sample Size adjusted BIC achieves a minimum of -135.77 with 9 factors
fa.parallel(llltI[-c(1,2)], n.iter=2000)
# Parallel analysis suggests that the number of factors = 7 and the number of components = 7
## split the difference between sample-size adjusted BIC and parallel analysis with 8:
factorization <- fa(llltI[-c(1,2)], nfactors=8); factorization
# Standardized loadings (pattern matrix) based upon correlation matrix
#                          MR6   MR1   MR2   MR4   MR3   MR5   MR7   MR8     h2    u2 com
# Creativity.self.rating  0.22  0.06 -0.04  0.08 -0.04  0.02 -0.05 -0.14 0.0658 0.934 2.5
# Percentage             -0.05 -0.02  0.01  0.01  0.00 -0.42  0.02  0.02 0.1684 0.832 1.0
# Time.X                 -0.04  0.11  0.04  0.88 -0.02  0.01  0.01  0.02 0.8282 0.172 1.0
# Time.PDF                0.02  0.99 -0.02  0.04  0.02  0.00 -0.01 -0.01 0.9950 0.005 1.0
# Time.Stats             -0.10  0.21  0.12  0.16 -0.04  0.04  0.12  0.25 0.2310 0.769 4.3
# Time.IRC                0.01 -0.02  0.99  0.02  0.02  0.01  0.00 -0.01 0.9950 0.005 1.0
# Time.Writing            0.01 -0.02  0.01  0.04 -0.01 -0.03  0.68  0.04 0.4720 0.528 1.0
# Time.Rec                0.20 -0.12 -0.06  0.42  0.62 -0.02 -0.07 -0.01 0.8501 0.150 2.2
# Time.Music             -0.05  0.05  0.02  0.02 -0.04  0.22  0.02  0.13 0.0909
0.909 2.0
# Time.SRS               -0.07  0.09  0.08  0.00  0.00  0.08  0.06  0.16 0.0702 0.930 3.6
# Time.Sysadmin           0.05 -0.09 -0.04  0.15  0.07  0.01  0.14  0.42 0.2542 0.746 1.7
# Time.Bitcoin            0.45  0.02  0.25 -0.07 -0.03 -0.09 -0.04  0.11 0.3581 0.642 1.9
# Time.Backups            0.22  0.10 -0.08 -