Personal experience and surveys on running out of socks; discussion of socks as a small example of human procrastination and irrationality, caused by lack of explicit deliberative thought where no natural triggers or habits exist. (status: finished; certainty: possible; importance: 4)

After running out of socks one day, I reflected on how ordinary tasks get neglected. Anecdotally, and in 3 online surveys, people report often not having enough socks, a problem which correlates with rarity of sock purchases and demographic variables, consistent with a neglect/procrastination interpretation: because there is no specific time or triggering factor to replenish a shrinking sock stockpile, it is easy to run out. This reminds me of akrasia on minor tasks, ‘yak shaving’, and the nature of disaster in complex systems: lack of hard rules lets errors accumulate, without any ‘global’ understanding of the drift into disaster (or at least inefficiency). Humans on a smaller scale also ‘drift’ when they engage in System I reactive thinking & action for too long, resulting in cognitive biases. An example of such drift is the generalized human failure to explore or experiment adequately, resulting in overly greedy exploitation of the current local optimum. Grocery shopping provides a case study: despite large potential gains, most people do not explore, perhaps because there is no established routine or practice involving experimentation. Fixes for these things can be seen as ensuring that System II deliberative cognition is periodically invoked to review things at a global level, such as developing a habit of maximum exploration at first purchase of a food product, or annually reviewing possessions to note problems like a lack of socks. While socks may be small things, they may reflect big things.

Socks possess the mysterious power, like cats, of vanishing; unlike cats, they don’t get hungry and come back. So I found myself one day in summer 2013 doing laundry a week early and wasting time schlepping back & forth solely because I had run out of socks entirely and couldn’t bear walking around in dirty socks. I suddenly realized that this was a ridiculous problem to have in an age awash with cheap textiles (so cheap that clothes must be shipped to Africa or incinerated lest the thrift stores burst at the seams), and immediately went on Amazon & bought a pack of 30 pairs to refill my ‘sockpile’. This made me curious: how many other people don’t have enough socks, and why not?

I began asking people if they thought they had enough socks, and quite a few people would say that they didn’t have enough, but that they hadn’t quite gotten around to buying more. (Although some insist Darn Tough socks changed their lives forever.)

So I began running polls, and I am not alone.

Sock Surveys

An otherwise-unpublished Samsung sock survey finds that “Brits lose an average of 1.3 socks each month (and more than 15 in a year)”, implying an annual loss of ~8 pairs in the best-case scenario, where you either don’t need exact matches (because all socks are the same kind) or don’t mind mismatches. If each pair is unique and one sock goes missing from each, then in the worst case an annual loss of 15 individual socks ruins 15 pairs, and one must buy another 15 pairs. This appears to be in addition to wear-and-tear or changes in necessary type, which must also be made up for.

In a Twitter survey 2019-01-20–2019-01-27 of my followers, I asked:

Do you have enough pairs of socks?

- Yes: 64% (n=689)
- No: 37% (n=405)

How many pairs of socks do you have?

- 0–10: 18% (n=118)
- 11–20: 46% (n=302)
- 21–30: 27% (n=177)
- 31+: 9% (n=59)

How often do you buy replacement socks?

- Monthly: 2% (n=15)
- Semi-annually: 33% (n=254)
- Annually: 37% (n=285)
- Less or never: 28% (n=216)

Who buys your socks?

- Me: 75% (n=580)
- Spouse/significant-other: 7% (n=54)
- Relative: 16% (n=123)
- Other: 2% (n=15)

At least among my Twitter social circle, not having enough socks is common, and a fair number of people are on the verge of sock bankruptcy. The purchase details suggest an answer to why: most people are responsible for their own sock maintenance, but buy on perhaps not even an annual basis (a plurality buy ‘annually’, and the ‘semi-annually’ group may be more than offset by the ‘less or never’ respondents); so it’s easy to forget and not buy socks.

Is socklessness concentrated among those who must buy their own socks & do so rarely? Twitter responses are independent and not linked by username (only the aggregate percentages and total n are reported), so there’s no way to see the intercorrelations. To do that, I set up a Google Surveys survey on 2019-01-20 (CSV), asking all 4 questions in a single survey, with n=130 US responses costing $100. (This is more expensive than my usual trick of asking only 1 question, costing $1/response rather than $0.10/response, but a set of 4 single-question surveys would be no better than the Twitter survey.) Eric Jorgensen also ran a version of the survey on a personality quiz website with an international audience (ODS/CSV), with n=455. His survey has the same questions (with the exception of the sock count question, which asked for a numeric rather than ordinal response, so I convert it back to ordinal), so I pool them for analysis:

```r
socks <- read.csv("https://www.gwern.net/docs/psychology/2019-01-20-gs-socks.csv")
socks <- subset(socks, select=c("Question..1.Answer", "Question..2.Answer", "Question..3.Answer", "Question..4.Answer"))
socks <- socks[socks$Question..3.Answer != "",] # rm NAs
socks <- socks[socks$Question..4.Answer != "",] # rm NAs
socks$Question..1.Answer <- socks$Question..1.Answer == "Yes"
socks$Question..2.Answer <- as.ordered(socks$Question..2.Answer)
socks$Question..3.Answer <- ordered(socks$Question..3.Answer, levels=c("monthly", "semi-annually", "annually", "less/never"))
socks$Question..4.Answer <- ordered(socks$Question..4.Answer, levels=c("me", "spouse or significant other", "relative", "other"))
socksI <- with(socks, data.frame(Enough=as.integer(Question..1.Answer), Count=as.integer(Question..2.Answer),
                                 Frequency=as.integer(Question..3.Answer), Purchaser=as.integer(Question..4.Answer)))

eric <- read.csv("https://www.gwern.net/docs/psychology/2019-01-21-eric-socksurvey.csv")
eric$Count <- as.integer(ordered(sapply(eric$Count,
    function(c) { if (c <= 10) { "0-10"; } else { if (c <= 20) { "11-20"; } else { if (c <= 30) { "21-30"; } else { "31+"; }}}})))
ericI <- subset(eric, select=c("Enough", "Count", "Frequency", "Purchaser"))
socksAllI <- rbind(socksI, ericI)

## Descriptive:
library(skimr)
skim(socksAllI)
# Skim summary statistics
#  n obs: 599
#  n variables: 4
#
# ── Variable type: integer
#   variable missing complete   n mean   sd p0 p25 p50 p75 p100     hist
#      Count       0      599 599 2.04 0.91  1   1   2   3    4 ▆▁▇▁▁▃▁▂
#     Enough       0      599 599 0.84 0.37  0   1   1   1    1 ▂▁▁▁▁▁▁▇
#  Frequency       0      599 599 2.76 0.87  1   2   3   3    4 ▁▁▇▁▁▇▁▅
#  Purchaser       0      599 599 1.63 0.96  1   1   1   3    4 ▇▁▁▁▁▂▁▁

## Bivariate correlations:
library(psych)
polychoric(socksAllI)
# Polychoric correlations
#           Enogh Count Frqnc Prchs
# Enough     1.00
# Count      0.29  1.00
# Frequency -0.23 -0.08  1.00
# Purchaser  0.16 -0.01  0.17  1.00
#
#  with tau of
#               1     2     3    4
# Enough    -0.99   Inf   Inf  Inf
# Count      -Inf -0.48  0.62 1.38
# Frequency  -Inf -1.60 -0.21 0.74
# Purchaser  -Inf  0.45  0.64 1.71
```

The GS respondents have less of a problem with sock shortages than my Twitter respondents (unsurprisingly), with 15% rather than 37% sockless, and the bivariate polychoric correlations make sense to me: count and having enough correlate strongly, of course, while purchase infrequency & purchaser distance predict fewer socks/more risk of not having enough.

What about joint relationships? brms conveniently supports ordinal predictors via “monotonic effects”, in addition to supporting ordinal regression for ordinal outcomes, so there’s no problem modeling any of the variables in any combination; given the overlap of sock count & having enough, it doesn’t make much sense to use one as a predictor of the other (although extracting a common factor might make sense). So to regress Frequency & Purchaser onto Enough & Count:

```r
library(brms)
brm(Enough ~ mo(Frequency) + mo(Purchaser), family="bernoulli", data=socksAllI)
# ...Population-Level Effects:
#             Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
# Intercept       2.26      0.28     1.79     2.91       1851 1.00
# moFrequency    -0.93      0.37    -1.71    -0.24       1948 1.00
# moPurchaser    -0.58      0.39    -1.38     0.17       3365 1.00
# ...

brm(Count ~ mo(Frequency) + mo(Purchaser), family="cumulative", data=socksAllI)
# ...Population-Level Effects:
#              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
# Intercept[1]    -1.03      0.17    -1.39    -0.72       2488 1.00
# Intercept[2]     0.77      0.17     0.41     1.06       2860 1.00
# Intercept[3]     2.17      0.20     1.75     2.56       3505 1.00
# moFrequency     -0.52      0.25    -1.02    -0.03       2558 1.00
# moPurchaser      0.02      0.29    -0.49     0.68       3142 1.00
```

While the parameterizations differ, the message remains the same: a fair number of people do not have enough socks (it’s not only me), and this particularly correlates with not frequently purchasing socks.

Demographics

Incidentally, both the GS & Eric Jorgensen polls include some demographics data: estimated gender/age/location for GS, and ESL-speaker/country/gender for Eric Jorgensen. Those aren’t my main interest here, but how do they look?
One could make some predictions based on stereotypes: women will have more socks than men, older people will be more likely to have enough socks than younger people, and there will probably be cross-country differences. Checking, older people are indeed more likely to have enough, cross-country differences are not so large as to be inferable, and there appears to be an inconsistency in gender effects: men have more problems with socks in the US than internationally?

Jorgensen’s data first; because of the large number of countries, heavy regularization must be used:

```r
polychoric(subset(eric, select=c(Gender.Int, Enough, Frequency, Purchaser)))
# Polychoric correlations
#            Gnd.I Enogh Frqnc Prchs
# Gender.Int  1.00
# Enough      0.02  1.00
# Frequency  -0.01 -0.25  1.00
# Purchaser   0.20  0.24  0.15  1.00

brm(Enough ~ Gender + Country + mo(Frequency) + mo(Purchaser), family=bernoulli,
    prior=c(set_prior("horseshoe(1, par_ratio=0.05)")),
    control=list(max_treedepth=15, adapt_delta=0.95), chains=30, iter=10000, data=eric)
# ...Population-Level Effects:
#             Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
# Intercept       1.85      0.25     1.52     2.54      10770 1.00
# GenderMale      0.00      0.04    -0.07     0.07     131457 1.00
# CountryAU      -0.00      0.08    -0.10     0.08     113988 1.00
# CountryAZ       0.00      0.18    -0.10     0.12      95067 1.00
# CountryBE       0.01      0.18    -0.09     0.11      84197 1.00
# CountryBG       0.01      0.16    -0.08     0.13      62989 1.00
# CountryBO       0.00      0.16    -0.10     0.12      85775 1.00
# CountryBR       0.00      0.16    -0.09     0.11      80827 1.00
# CountryBS      -0.09      0.53    -1.28     0.05      29590 1.00
# CountryCA       0.00      0.08    -0.08     0.10     118251 1.00
# CountryCH      -0.02      0.26    -0.20     0.07      55249 1.00
# CountryCZ       0.01      0.18    -0.08     0.15      54813 1.00
# CountryDE       0.01      0.18    -0.07     0.16      72813 1.00
# CountryDK      -0.00      0.11    -0.11     0.09     111953 1.00
# CountryEE       0.00      0.15    -0.10     0.11     109200 1.00
# CountryES       0.01      0.17    -0.09     0.12      59172 1.00
# CountryFI       0.00      0.15    -0.10     0.11      90646 1.00
# CountryFR      -0.00      0.11    -0.11     0.09     115618 1.00
# CountryGB      -0.00      0.06    -0.09     0.08     101507 1.00
# CountryGR       0.01      0.18    -0.08     0.13      33231 1.00
# CountryHK       0.01      0.15    -0.09     0.12      73429 1.00
# CountryHR      -0.01      0.12    -0.13     0.08      98295 1.00
# CountryHU       0.01      0.17    -0.09     0.12      85208 1.00
# CountryID       0.00      0.10    -0.08     0.11      91982 1.00
# CountryIE      -0.01      0.15    -0.15     0.07      55181 1.00
# CountryIL      -0.04      0.28    -0.46     0.06      35464 1.00
# CountryIN      -0.02      0.13    -0.20     0.06      69378 1.00
# CountryIR      -0.02      0.25    -0.19     0.07      53285 1.00
# CountryIS      -0.03      0.29    -0.23     0.07      44140 1.00
# CountryIT       0.00      0.16    -0.10     0.11      71643 1.00
# CountryJE       0.00      0.16    -0.09     0.11      65529 1.00
# CountryJM       0.00      0.11    -0.09     0.11      91020 1.00
# CountryJP       0.00      0.10    -0.09     0.09     118871 1.00
# CountryKE       0.01      0.16    -0.09     0.12     104434 1.00
# CountryKR       0.01      0.16    -0.09     0.12      93687 1.00
# CountryLB       0.01      0.17    -0.08     0.14      51394 1.00
# CountryLT       0.01      0.15    -0.09     0.12      97039 1.00
# CountryMD       0.00      0.16    -0.09     0.11      79148 1.00
# CountryMK      -0.01      0.17    -0.16     0.07      14656 1.00
# CountryMM       0.00      0.15    -0.09     0.11     107831 1.00
# CountryMX       0.01      0.16    -0.08     0.12      84871 1.00
# CountryMY       0.01      0.19    -0.08     0.15      57119 1.00
# CountryNL       0.04      0.31    -0.05     0.48      44071 1.00
# CountryNO       0.00      0.10    -0.08     0.11     101486 1.00
# CountryNONE     0.03      0.28    -0.06     0.33      36810 1.00
# CountryPH      -0.01      0.13    -0.20     0.06      63309 1.00
# CountryPL       0.00      0.10    -0.10     0.10      97421 1.00
# CountryPT       0.01      0.17    -0.09     0.12      69037 1.00
# CountryQA       0.01      0.16    -0.09     0.11      66602 1.00
# CountryRO       0.01      0.16    -0.09     0.11      82836 1.00
# CountryRU       0.00      0.15    -0.09     0.11      99649 1.00
# CountrySA       0.01      0.17    -0.09     0.11      70890 1.00
# CountrySE      -0.00      0.08    -0.10     0.08     105770 1.00
# CountrySG       0.02      0.21    -0.06     0.20      48360 1.00
# CountryTR      -0.01      0.17    -0.16     0.07      56410 1.00
# CountryTT       0.01      0.17    -0.08     0.14      68940 1.00
# CountryUA       0.01      0.16    -0.08     0.13      60610 1.00
# CountryUS      -0.01      0.06    -0.14     0.05      73991 1.00
# CountryVE       0.01      0.17    -0.09     0.13      35998 1.00
# CountryVN       0.00      0.17    -0.09     0.11      54781 1.00
# CountryXK       0.00      0.16    -0.10     0.12     101146 1.00
# CountryZA       0.01      0.16    -0.08     0.13      59905 1.00
# moFrequency    -0.17      0.40    -1.40     0.02       7944 1.00
# moPurchaser    -0.01      0.08    -0.17     0.05      52232 1.00
# ...

brm(Count ~ Gender + Country + mo(Frequency) + mo(Purchaser), family=cumulative,
    prior=c(set_prior("horseshoe(1, par_ratio=0.05)")),
    control=list(max_treedepth=15, adapt_delta=0.95), chains=30, iter=10000, data=eric)
# ...Population-Level Effects:
#              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
# Intercept[1]    -0.91      0.31    -1.57    -0.33     103560 1.00
# Intercept[2]     1.07      0.32     0.42     1.68     105615 1.00
# Intercept[3]     2.68      0.35     1.98     3.37     113296 1.00
# GenderMale      -0.13      0.17    -0.50     0.16     125937 1.00
# CountryAU       -1.03      0.69    -2.48     0.08     124545 1.00
# CountryAZ       -0.53      1.12    -3.33     1.18     158180 1.00
# CountryBE        0.11      0.81    -1.54     1.95     215427 1.00
# CountryBG       -0.07      0.61    -1.45     1.20     224586 1.00
# CountryBO        0.12      0.80    -1.51     1.97     208130 1.00
# CountryBR       -0.55      1.13    -3.36     1.16     158105 1.00
# CountryBS        0.27      0.91    -1.41     2.47     194178 1.00
# CountryCA        0.98      0.57    -0.04     2.09      82595 1.00
# CountryCH       -0.51      1.10    -3.26     1.21     161968 1.00
# CountryCZ        0.37      0.74    -0.90     2.12     154331 1.00
# CountryDE        0.29      0.59    -0.73     1.68     156133 1.00
# CountryDK        0.62      0.73    -0.47     2.27     113453 1.00
# CountryEE       -0.60      1.15    -3.44     1.12     155992 1.00
# CountryES       -0.12      0.75    -1.84     1.38     219013 1.00
# CountryFI        1.27      1.56    -0.79     4.96     123980 1.00
# CountryFR       -0.17      0.55    -1.46     0.88     202919 1.00
# CountryGB        0.17      0.28    -0.33     0.81      90166 1.00
# CountryGR        0.02      0.62    -1.32     1.37     214245 1.00
# CountryHK       -0.96      1.24    -4.01     0.65     144472 1.00
# CountryHR        0.01      0.62    -1.34     1.37     222467 1.00
# CountryHU        0.08      0.80    -1.61     1.88     212960 1.00
# CountryID       -1.81      0.92    -3.80    -0.14     138380 1.00 # Indonesia
# CountryIE       -0.92      1.23    -3.93     0.67     145126 1.00
# CountryIL        0.22      0.62    -0.93     1.67     181101 1.00
# CountryIN       -2.25      1.38    -5.43    -0.09     138907 1.00 # India
# CountryIR        0.22      0.84    -1.40     2.21     194840 1.00
# CountryIS        0.05      0.80    -1.64     1.84     210993 1.00
# CountryIT        0.17      0.82    -1.47     2.07     206471 1.00
# CountryJE        1.27      1.56    -0.79     4.96     123390 1.00
# CountryJM        0.76      0.64    -0.23     2.10      86077 1.00
# CountryJP       -0.42      0.60    -1.82     0.52     156615 1.00
# CountryKE       -0.56      0.82    -2.53     0.69     164055 1.00
# CountryKR        0.07      0.76    -1.54     1.76     219641 1.00
# CountryLB       -0.32      0.65    -1.87     0.79     178249 1.00
# CountryLT        0.44      0.79    -0.86     2.31     157516 1.00
# CountryMD        0.77      1.12    -0.94     3.41     137492 1.00
# CountryMK       -0.27      0.77    -2.13     1.12     199321 1.00
# CountryMM       -0.46      1.08    -3.19     1.24     164795 1.00
# CountryMX        0.36      0.68    -0.78     1.96     158241 1.00
# CountryMY       -1.91      1.39    -5.10     0.06     133497 1.00
# CountryNL        0.49      0.46    -0.22     1.46      90797 1.00
# CountryNO        1.22      0.69    -0.02     2.56      91436 1.00
# CountryNONE     -0.66      0.60    -1.95     0.25     127296 1.00
# CountryPH       -0.25      0.51    -1.44     0.63     177716 1.00
# CountryPL        0.09      0.45    -0.82     1.11     171907 1.00
# CountryPT        0.92      0.98    -0.52     3.07     118487 1.00
# CountryQA       -0.46      1.09    -3.17     1.27     166932 1.00
# CountryRO        0.02      0.78    -1.70     1.71     216995 1.00
# CountryRU       -0.60      1.15    -3.45     1.12     154146 1.00
# CountrySA       -1.01      1.26    -4.07     0.61     142090 1.00
# CountrySE        0.59      0.53    -0.23     1.72      85612 1.00
# CountrySG       -0.67      0.71    -2.29     0.37     143570 1.00
# CountryTR        0.14      0.67    -1.21     1.69     214560 1.00
# CountryTT        0.65      0.87    -0.69     2.63     121131 1.00
# CountryUA       -0.59      0.84    -2.59     0.67     150001 1.00
# CountryUS        0.79      0.28     0.26     1.35      72791 1.00
# CountryVE        1.35      1.14    -0.35     3.72     101866 1.00
# CountryVN       -0.65      1.18    -3.56     1.09     153337 1.00
# CountryXK       -0.59      1.15    -3.44     1.12     150475 1.00
# CountryZA        0.01      0.62    -1.35     1.35     212747 1.00
# moFrequency     -0.89      0.36    -1.61    -0.19      96417 1.00
# moPurchaser     -0.06      0.26    -0.64     0.45     179817 1.00
```

Nothing of note emerges here, except perhaps a tendency for males to have fewer socks (albeit they appear to be content with fewer); there might be country-level effects, as even the horseshoe regularization doesn’t pull them tightly to zero, but there is far too little data to be confident in what the effects might be.
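Why shrinkage helps with so many country dummies can be illustrated with a toy example (illustrative only; this is crude fixed shrinkage toward zero, not the horseshoe prior itself, and all numbers are assumed): with dozens of countries estimated from a handful of respondents each, the raw per-country means are mostly noise, so pulling them toward the global mean reduces error whenever the true effects are small.

```python
import random
import statistics

def mse(estimates, truth=0.0):
    """Mean squared error of a list of estimates against a common true value."""
    return statistics.fmean((e - truth) ** 2 for e in estimates)

rng = random.Random(42)
# 60 hypothetical 'countries', true effect 0 for all, ~5 noisy observations each
raw_means = []
for _ in range(60):
    obs = [rng.gauss(0.0, 1.0) for _ in range(5)]
    raw_means.append(statistics.fmean(obs))

# crude shrinkage: pull every country estimate 80% of the way toward zero
shrunk = [0.2 * m for m in raw_means]

print("raw MSE:   ", mse(raw_means))
print("shrunk MSE:", mse(shrunk))
```

The shrunken estimates are badly biased for any country whose true effect is large, which is exactly the trade-off the horseshoe prior manages adaptively: it shrinks small noisy coefficients hard while letting genuinely large ones escape.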
In the GS US survey data, there is only one country, of course, but in exchange an inferred age bracket is available:

```r
socks <- read.csv("https://www.gwern.net/docs/psychology/2019-01-20-gs-socks.csv")
socks <- subset(socks, select=c("Question..1.Answer", "Question..2.Answer", "Question..3.Answer", "Question..4.Answer", "Gender", "Age"))
socks <- socks[socks$Question..3.Answer != "",] # rm NAs
socks <- socks[socks$Question..4.Answer != "",] # rm NAs
socks$Question..1.Answer <- socks$Question..1.Answer == "Yes"
socks$Question..2.Answer <- as.ordered(socks$Question..2.Answer)
socks$Question..3.Answer <- ordered(socks$Question..3.Answer, levels=c("monthly", "semi-annually", "annually", "less/never"))
socks$Question..4.Answer <- ordered(socks$Question..4.Answer, levels=c("me", "spouse or significant other", "relative", "other"))
socks <- socks[socks$Age != "Unknown" & socks$Gender != "Unknown",]
socksI <- with(socks, data.frame(Enough=as.integer(Question..1.Answer), Count=as.integer(Question..2.Answer),
                                 Frequency=as.integer(Question..3.Answer), Purchaser=as.integer(Question..4.Answer),
                                 Age=as.integer(Age), Gender=as.integer(Gender == "Male")))
skim(socksI)
# Skim summary statistics
#  n obs: 114
#  n variables: 6
#
# ── Variable type: integer
#   variable missing complete   n mean   sd p0 p25 p50 p75 p100     hist
#        Age       0      114 114 3.92 1.57  1   3   4   5    6 ▃▂▁▇▅▁▇▆
#      Count       0      114 114 2.34 0.94  1   2   2   3    4 ▃▁▇▁▁▅▁▂
#     Enough       0      114 114 0.79 0.41  0   1   1   1    1 ▂▁▁▁▁▁▁▇
#  Frequency       0      114 114 2.8  0.84  1   2   3   3    4 ▁▁▅▁▁▇▁▃
#     Gender       0      114 114 0.72 0.45  0   0   1   1    1 ▃▁▁▁▁▁▁▇
#  Purchaser       0      114 114 1.39 0.88  1   1   1   1    4 ▇▁▁▁▁▁▁▁

polychoric(socksI)
# Polychoric correlations
#           Enogh Count Frqnc Prchs Age   Gendr
# Enough     1.00
# Count      0.19  1.00
# Frequency -0.21  0.12  1.00
# Purchaser -0.23 -0.24 -0.25  1.00
# Age        0.17  0.08  0.09 -0.20  1.00
# Gender    -0.49 -0.14  0.11  0.55  0.26  1.00

brm(Enough ~ Gender + mo(Age) + mo(Frequency) + mo(Purchaser), family="bernoulli", data=socksI)
# ...Population-Level Effects:
#             Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
# Intercept       2.42      1.13     0.28     4.70       2064 1.00
# Gender         -1.81      0.91    -3.82    -0.27       2802 1.00
# moAge           1.08      0.84    -0.52     2.81       2342 1.00
# moFrequency    -0.03      1.00    -1.87     2.10       2111 1.00
# moPurchaser    -0.99      0.75    -2.50     0.45       3191 1.00

brm(Count ~ Gender + mo(Age) + mo(Frequency) + mo(Purchaser), family="cumulative", data=socksI)
# ...Population-Level Effects:
#              Estimate Est.Error l-95% CI u-95% CI Eff.Sample Rhat
# Intercept[1]    -0.65      0.88    -2.21     1.20       2351 1.00
# Intercept[2]     1.45      0.90    -0.14     3.39       2295 1.00
# Intercept[3]     2.94      0.93     1.31     4.91       2392 1.00
# Gender           0.12      0.41    -0.72     0.91       5283 1.00
# moAge           -0.49      0.57    -1.60     0.65       3793 1.00
# moFrequency      1.44      0.92    -0.19     3.34       2553 1.00
# moPurchaser      1.35      0.74     0.02     2.86       4586 1.00
```

There are possible age effects in the expected direction: older people appear to be better at managing sock levels.

Curiously, there may be different gender effects in the two survey datasets: in the Jorgensen international survey, gender is largely inert (except for a correlation with Purchaser), while in the US GS survey, gender correlates with everything, and men appear much less likely to have enough socks (but to have more socks). Poking at the data, there appears to be another connection: in the US, men are more likely to do their own sock purchasing. I wonder if this reflects a difference in sex roles, with women doing more clothing shopping in non-US countries and taking care of sock needs along the way?

Christmas advice

“What do you see when you look in the Mirror [of Erised]?”

“I? I see myself holding a pair of thick, woollen socks.”

Harry stared.

“One can never have enough socks”, said Dumbledore. “Another Christmas has come and gone and I didn’t get a single pair. People will insist on giving me books.”

J.K. Rowling, Harry Potter and the Philosopher’s Stone

Given that consistently >15% of respondents don’t have enough socks, and that in the US, younger males are especially likely to lack them, here’s some Christmas advice: if you don’t know what to buy someone, why not buy them some really good socks? Socks make a great gift. Everyone will need replacement socks sooner or later, and it seems lots of people don’t get them. Unlike the feared ‘ugly sweater from Grandma’ present, socks aren’t on public display, so if they’re ugly, it’s not too big a deal. Nor do they take up much space, and they can be used for more of the year. An annual gift of socks is about the optimal tempo, given the surveys about how often people lose or buy socks, and Christmas is an excellent Schelling point, since it’s already associated with socks. Finally, socks may be a cunning gift: they can be easily evaluated as superior, and so seem premium despite not costing all that much in absolute terms.

Who Moved My Sock?

How had I run out of socks? Well, like the joke about going bankrupt, I did it one day at a time: a sock quietly disappearing one day, a sock being tossed out due to holes & thinning another day… At no point did I ever deliberately try to economize on socks or go without socks, or explicitly decide that it wasn’t worth the bother of picking up some socks next time I was in a clothing store or placing an Amazon order—it just happened on its own.

The Importance Of The Unimportant

In the case of socks, there is never a ‘Socknik moment’. There is only a slippery slope/sorites paradox: there’s no hard-and-fast line between enough and too few socks, socks slowly wear out or lose mates, and if you had 20 and now have 19, well, that’s not a big deal; and when you are down to 18, that’s not a big deal either, why go shopping; and soon you’ll be down to 17… And if you don’t buy socks regularly as part of a clothes-shopping trip, when will you? Eventually you’re wearing uncomfortable socks, or being cold, or being forced to do laundry runs early, without there ever having been a clear ‘I need to buy some socks!’ trigger point. Even a habit like buying replacement socks once a year as part of spring cleaning would be enough, but one still needs to instill the habit.

Some might object that this is overthinking socks, and that one should never think about socks at all. This is short-sighted. If we were all perfectly rational and omniscient and possessed of infinite computing power, all our problems would already be solved and we would buy socks at the exact optimal moment as part of the grand plan; but we are not. Dealing with our bounded rationality is the central concern of all discussions of rationality & optimizing & biases. It may not seem important to think about socks at any particular moment, and socks are probably not the most pressing thing at this instant for me either, compared to tasks like ‘write an essay’ or ‘exercise’ or ‘answer emails’. But if it is better to wear socks than not, and one does not wish to go barefoot for the rest of one’s life, then it must be optimal at some moment to think about socks. Perhaps that moment is a few months from now, during downtime, when one’s ‘sockpile’ has worn down; but there must be one.
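The sorites dynamic can be made concrete with a toy simulation (illustrative only: the monthly loss rate is loosely based on the Samsung figure of ~1.3 single socks/month, treated as ~0.65 ruined pairs; the thresholds, horizon, and one-loss-per-month cap are all assumptions):

```python
import random

def simulate(years=5, start=20, loss_rate=1.3 / 2, annual_review=False, target=20, seed=0):
    """Simulate a drawer of sock *pairs*: each month a pair may be ruined
    (lost mate, holes); an annual review tops the drawer back up to `target`,
    otherwise nothing ever triggers a purchase. Returns months spent 'short'."""
    rng = random.Random(seed)
    pairs, months_short = start, 0
    for month in range(12 * years):
        # stochastic monthly attrition, at most one pair per month
        if rng.random() < loss_rate:
            pairs = max(0, pairs - 1)
        if annual_review and month % 12 == 11:
            pairs = target  # spring-cleaning habit: restock once a year
        if pairs < 5:       # fewer than 5 wearable pairs = early laundry runs
            months_short += 1
    return months_short

print("No habit:      months short =", simulate())
print("Annual review: months short =", simulate(annual_review=True))
```

With no trigger, the drawer drifts below the comfort threshold within a couple of years and stays there; the once-a-year restock habit never lets it get close, even though no single month’s loss was ever worth acting on.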
Similarly, one could scoff at all of the necessities of life like getting groceries, or filing a tax return, or getting life insurance: surely at any given instant there is always something more important one could be working on, like getting a college degree or founding a startup? But this argument must have some flaw, or by induction you would never do them, and so you would starve to death while being audited by the IRS as your heirs are rendered homeless. For example, the value of these tasks increases over time: you don’t really need to do your taxes early, long before the deadline, but you do want to get them done by the deadline. With groceries, as long as you have enough to eat, it’s not much of a problem to be low on food—perhaps it reduces your variety a bit, but it’s not like you’ll starve, except if you run out of food entirely, in which case you will. And failure to get life insurance incurs a small loss each and every day (because of the risk of you dying that day and failing to provide for whatever you wanted life insurance for). Further, one’s life is a complex system: one’s house, one’s career, one’s computer, all of these are complex systems with interacting, cascading failures. All complex systems (“How Complex Systems Fail”, Cook 2000) operate in a constant state of low-grade failure, where minor errors must be regularly repaired in order to prevent a large-scale failure cascading through the whole system. When a steel furnace explodes, killing people, it doesn’t happen out of the blue, but reflects a long series of choices & gradually escalating issues & near-misses, and is a “normal accident”. When I lost weeks of time and money to a laptop & backup failure, it wasn’t because only one thing went wrong: it required at least 3 unusual failures simultaneously in my laptop & backup systems, any one of which not happening would have prevented the full accident.
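The multiplicative structure of such accidents is easy to demonstrate numerically (a toy Monte Carlo model, not drawn from any of the incidents above; the failure rates and the assumption of independence are both illustrative): when an accident requires several independent safeguards to fail at once, the accident rate is a product of the individual rates, so a modest across-the-board improvement in care compounds into a large reduction in the bad tail.

```python
import random

def tail_risk(per_factor_reliability, n_factors=3, trials=100_000, seed=1):
    """Estimate the probability that all n independent safeguards fail in the
    same trial; the accident probability is the *product* of the individual
    failure probabilities, here checked by Monte Carlo."""
    rng = random.Random(seed)
    fail_p = 1 - per_factor_reliability
    accidents = sum(
        all(rng.random() < fail_p for _ in range(n_factors))
        for _ in range(trials)
    )
    return accidents / trials

sloppy  = tail_risk(0.80)  # 20% failure rate per safeguard -> ~0.2^3 = 0.008
careful = tail_risk(0.90)  # halve each rate to 10%         -> ~0.1^3 = 0.001
print(f"sloppy: {sloppy:.4f}, careful: {careful:.4f}")
```

Halving each factor’s failure rate cuts the three-factor accident rate roughly eight-fold, which is the Shockley-style multiplicative point: small systematic differences in per-step carefulness produce log-normal, heavily skewed differences in final outcomes.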
Each slip may seem relatively minor and extraordinarily unlikely to have any serious consequences, but, like the “indifference of the indicator”, they add up over a lifetime and eventually a tail risk materializes. Chance disfavors the unprepared mind—time and chance happeneth to all, and indeed do many things come to pass. Because failures interact and multiply, final damages resemble a log-normal distribution: each individual factor can block the accident, so the final damage of the outcome is the multiplication of the individual factors. The log-normal implies that a small systematic increase or decrease in each factor, analogous to being more careful & proactive in general about maintenance and risk, can cause a large difference in the final outcome (Shockley 1957). One must expect the unexpected, and a failure to ‘sweat the small stuff’ means you are allowing brush to pile up in the forest: one match could set it ablaze. People who do not sweat the small stuff have a remarkable tendency to have ‘bad luck’ and somehow keep getting into trouble, much as the less intelligent suffer more ‘accidents’, or natural disasters have death tolls almost entirely determined by poverty—certainly, time & chance may happeneth to us all, but our preparations & reactions play an even greater role in determining how far things go. A lack of the bourgeois virtues is a lack of foresight, preparations, and reserves/insurance/slack. Consider how careless some people are in matters of everyday life: it’s not hard to see how carelessness in, say, getting drunk or keeping up rental payments can quickly escalate.

‘Yak Shaving’ as a Failure Cascade

Seth Godin explains yak shaving as a story:

“I want to wax the car today.”

“Oops, the hose is still broken from the winter. I’ll need to buy a new one at Home Depot.”

“But Home Depot is on the other side of the Tappan Zee bridge and getting there without my EZ Pass is miserable because of the tolls.”

“But, wait! I could borrow my neighbor’s EZ Pass…”

“Bob won’t lend me his EZ Pass until I return the mooshi pillow my son borrowed, though.”

“And we haven’t returned it because some of the stuffing fell out and we need to get some yak hair to restuff it.”

And the next thing you know, you’re at the zoo, shaving a yak, all so you can wax your car.

Godin’s take-away is that yak shaving is misguided perfectionism: once one realizes one is yak shaving, one should decide “Don’t go to Home Depot for the hose. The minute you start walking down a path toward a yak shaving party, it’s worth making a compromise. Doing it well now is much better than doing it perfectly later.”

I interpret yak shaving entirely differently. At least when I feel I am trapped in yak shaving, it more often reflects a failure cascade in the complex system I am currently part of: either mentally I have gotten trapped in a local minimum and have failed to reflect periodically on what the best approach is, or the system really is broken and, once the yak is shaved, requires root-cause analysis to find the fundamental problems and how to prevent them from recurring. I see ‘yak-shaving’ as a description of a situation where you are nested so deep in subgoals that you’ve forgotten your original goal, at which point a good heuristic is to wake up and say “this is a lot of yak-shaving!” and think about what has led to such an undesirable situation. Thinking about my own uses of the term, there are 3 different kinds of problems which can lead to yak-shaving: avoidance, lack of mindfulness, and cascading problems/system failures.

First, you are procrastinating or being akratic or falling into perfectionism (closely related to procrastination), by deliberately overcomplicating something or trying to use fancy or shiny new techniques, which of course frequently lead to new subgoals because you aren’t familiar with them yet.
This is fine sometimes (you have to learn those new techniques somewhen) or if it’s a kind of ‘structured procrastination’ (where the yak-shaving is itself valuable eg because it makes a neat blog post or useful software package), but often isn’t. The usual akrasia/procrastination equation stuff, except it’s being hidden under a gloss of superficial productivity. (“I can’t write my novel, I have to clean my desk which requires […solving 15 deeper nested issues…] which will take up all the rest of the day; I sure am a hard-working writer.”) By calling it yak-shaving, you admit you are faffing around and you then solve your problem the way you knew you should all along; or you can deal with why you are avoiding finishing, or whether you really want to do it at all. If you refuse to acknowledge the yak-shaving, then even if you ‘shave the yaks’ you’ll just find another way to overcomplicate things or a different thing to waste time on or switch to procrastinating on social media etc. you have been following a greedy strategy of taking the quickest option at each decision node; that you have now stacked up so many tasks to complete suggests that the greedy strategy has failed and you have fallen into a local pessima. Like with sunk costs, it’s time to stop being so mindless, step back, think about it more globally, and ask if there’s some better approach. Was there some entirely different strategy which seemed too expensive compared to your current path (which has actually turned out to be far more costly than predicted) and now looks cheap? Or are there any intermediate middle steps which are expensive but cut out a large number of other steps? Or perhaps all the paths are so costly that the top-level goal now no longer looks worth bothering with and you should drop all the existing tasks & stop shaving the yak entirely. 
Programmers are particularly susceptible to this because the line between useful automation and immensely complicated time-wasting tinkering is a fine one indeed. This can be common in programming, where you can, say, build up a Rube Goldberg collection of shell scripts and Emacs functions and manual edits to text because you wanted to avoid writing a SQL function (because it would take 20 minutes of consulting the SQL documentation to get it right); but by the time you’re consulting the Bash FAQ or resetting IFS variables to deal with a problem half an hour later, it’s good to wake up and ask ‘am I yak-shaving?’—and then you might realize that the data or problem has turned out to be sufficiently painful (eg lots of special characters or oddities in data formatting) that you can’t catch all the special cases and you would’ve been better off writing the SQL query in the first place. In Godin’s example, perhaps one should simply return the yak pillow and hope the neighbor won’t notice the missing stuffing, or that they will prefer to simply have it back rather than wait for you to fix it whenever, or risk upsetting them a little; or order the hose on Amazon even if it costs $5 more, to get it done; or pay the damn toll like anyone else; or finally, ask whether waxing the car is worthwhile at all (who notices)? Here ‘yak-shaving’ serves as a useful mental trigger which can break you out of the myopic problem-solving loop. This sort of yak-shaving is usually quite bad, and if you don’t break out of it soon enough, it can lead to considerable exhaustion and waste of time, and lock you into bad long-term decisions. So it’s good to periodically ask, if you aren’t making progress on a problem of intrinsic interest to you, “so all this work, what’s it for anyway?
If I were starting over from scratch—knowing what I do now—is this really how I would approach this problem?” Cascading failures: what you are doing is the best way to solve the problem overall, it’s just that things have been going wrong and you’ve been running into continual problems, so you find yourself nested many layers deep dealing with the cascade of problems and documentation: …all your (encrypted) backups are broken because you can’t get the most recent decryption key because your drive is corrupted because you were running the GPU 24/7 (to name a recent example of mine), so you’re in a LiveCD trying to mount the drive, trying passwords, trying… In this case, in addition to simply shaving the yak, you need to do root-cause analysis—you are experiencing what might be called muri—and in addition to figuring out how to solve each proximate problem on the way, figure out why they happened & how to prevent them in the future. In programming, this frequently entails filing bug reports & documentation patches, formalizing your recovery methods as scripts or programs, adding tests or redundancy or upgrading hardware, and writing post-mortems. Godin’s stack of nested related problems is simply a form of this. But here, simply shaving the yak may solve the fur problem & allow popping the stack of subgoals, but it’s not enough. It’s not enough to simply close those open loops, or have a system for recording open loops. Root-cause analysis is needed. Why did the yak fur fall out of the pillow in the first place and how can it be prevented ever again? Why didn’t he have his EZ Pass in the first place? Why wasn’t the hose put on the weekly shopping list (there is a shopping list, right?) and replaced long before? And so on.
Without attacking problems at the root, you might as well buy a seasonal pass to the zoo, because you are merely applying bandaids to a failing complex system, and if you don’t do any root-cause fixes, eventually your problems will seriously stack up and you’ll find yourself hit by a so-called ‘perfect storm’ (actually perfectly foreseeable & inevitable) and then you’ll really be sorry. So, ‘yak-shaving’ is a useful heuristic for keeping planning stacks from nesting too deeply, by periodically asking whether one is falling prey to one of those 3 failure modes and needs to break out of the yak-shaving by an appropriate countermeasure: interrogating the reasons for the akrasia; finding a better approach; or prioritizing fixing the root-causes of needing to yak-shave (rather than focusing on the yak-shaving itself).

The Ur Cognitive Bias “I started eating with them [the chemists] for a while. And I started asking, ‘What are the important problems of your field?’ And after a week or so, ‘What important problems are you working on?’ And after some more time I came in one day and said, ‘If what you are doing is not important, and if you don’t think it is going to lead to something important, why are you at Bell Labs working on it?’ I wasn’t welcomed after that; I had to find somebody else to eat with!…In the fall, Dave McCall stopped me in the hall and said, ‘Hamming, that remark of yours got underneath my skin. I thought about it all summer, i.e. what were the important problems in my field. I haven’t changed my research’, he says, ‘but I think it was well worthwhile.’ And I said, ‘Thank you Dave’, and went on. I noticed a couple of months later he was made the head of the department. I noticed the other day he was a Member of the National Academy of Engineering. I noticed he has succeeded. I have never heard the names of any of the other fellows at that table mentioned in science and scientific circles.” Richard Hamming, “You and Your Research” “A mule who has carried a pack for ten campaigns under Prince Eugene will be no better a tactician for it, and it must be confessed, to the disgrace of humanity, that many men grow old in an otherwise respectable profession without making any greater progress than this mule.” Frederick the Great, “Thoughts on Tactics” One problem here is that the unimportant becomes important, slowly and subtly. There is no IRS clock ticking on one’s wall, any more than there is a realtime display of one’s sockpile with defined red danger zones upon which one orders new socks. For many things, there is never any hard deadline or scheduled event or reminder which would bring a need to mind. 
So necessary things suffer from what a computer scientist might call starvation: when a background task, like running a backup, which has a low priority (eg a backup can wait a few minutes without much risk), is continuously pushed out by higher priority tasks and never gets to run; while it may not have been urgent that it run immediately, it is urgent that it run eventually. (Anyone who disagrees that backups are important is free to implement that advice and see how it works for them in the long run.) Starvation reflects bad planning: the priorities of starving tasks are not increased over time to reflect their growing urgency, or starving tasks may not be considered at all by a myopic planner. And for humans, ‘out of sight is out of mind’, so myopia is easy. Many human cognitive biases can be considered as reflections of a single ur-cognitive bias (Stanovich 2010, Decision Making and Rationality in the Modern World): a failure to activate difficult, deliberate, explicit System II thinking when appropriate, ‘waking up’ from the usual fast frugal System I thinking, perhaps from time to time just to re-evaluate things. “Humans are not automatically strategic.” Instead, System I is always invoked, regardless of whether System II is needed, and its fast, frugal, reflexive thinking takes over. When System I runs unimpeded, work tends to degenerate into what Google SRE terms “toil”; Beyer et al 2016: …toil is the kind of work tied to running a production service that tends to be:

Manual

Repetitive

Automatable and not requiring human judgment

Interrupt-driven and reactive

Of no enduring value

One works hard, but that and a few bucks will get you a cup of coffee. Eliminating toil requires stepping back to take an outside view and possibly re-engineer things. Of course System II can’t run all the time, any more than we can ponder every day whether today we should re-engineer our sock-buying system or buy more socks. We hardly ever do—but that’s not quite the same as never. It needs to run occasionally to check the fundamentals, to look for tasks starving in the background for lack of saliency, and to reflect on what is being done that ought not to be done at all, and consider entirely new alternatives. I think apparent instances of ‘sunk cost’ are better described as thoughtlessness. To give an example: when chess or Go players continue throwing pieces into a doomed position, is that because they explicitly realize it is doomed but feel they must persevere anyway, or because chess amateurs commit more confirmation bias than masters (Cowley & Byrne 2004) and don’t realize that the positions are in fact irretrievable? When one engages in spring-cleaning, one may wind up throwing out or giving away a great many things which one has owned for months or years but had not disposed of before; is this an instance of sunk cost, where you over-valued them simply because you have invested in holding onto them for X months, an instance of the endowment effect, where something is more valuable because it’s yours (a bias which doesn’t change with additional investment)—or is this an instance of you simply never before devoting a few seconds to pondering whether you genuinely liked that checkered scarf, and if you haven’t worn it in years, how likely are you to ever wear it again? When we see an apparent sunk cost, might we not be seeing a well-developed habit which made sense when it was developed and perhaps has simply never been critically re-examined in the light of current circumstances?
Habits are invaluable, but they are also invisible and indurate except at times of crisis when one is re-prioritizing things. Even in corporations, where sunk cost thinking is at its worst, many of the instances (eg the new CEO who radically overhauls the company by cutting products & divisions & employees) often simply amount to executing changes that the rest of the company knows are long overdue but could never quite rise to a priority without the Schelling point of a new CEO brought on to shake things up. (Or indeed, in general: “never let a crisis go to waste.”) Few people persevere in a mistaken choice of college degree because they truly value what they have obtained irrationally more solely because they have already spent a lot of money on it, which is the classic ‘sunk cost bias’. Usually, it’s more that they are so busy with classes & student life & projects & hobbies that they don’t think about it; continuing with the original plan is the path of least reflection; the occasional stray thoughts of ‘maybe this is the wrong path’ are too painful to pursue more than briefly; and they have not sat down and pondered for even 5 minutes the costs/benefits, or how well it’s been going, and seriously opened up internally to the possibility of quitting. One continues because one continues. Nor is there necessarily any point at which they will be forced to consider this before graduation, as college systems are geared to usher one from enrollment to graduation, and one doesn’t have to make an extraordinary effort at any point to continue on that path. (One does for graduate school, which is fortunate, considering how much student debt that can entail, but then the same dynamic will kick in once one is in grad school.) Or at what point does a commuter realize that the tradeoff isn’t that great? Any doubts may simply starve for lack of thought to feed them, until one day, one suddenly ‘wakes up’.
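This ‘starvation’ of low-priority tasks has a standard fix in computer scheduling: priority aging, where a waiting task’s effective priority grows over time so a myopic highest-priority-first planner cannot defer it forever. A toy sketch (illustrative only; the task names and parameters are invented, not from this essay):

```python
# Toy illustration of task starvation and its fix, "priority aging":
# each step, run the task with the highest effective priority
# (base priority + aging_rate * time spent waiting).

def schedule(tasks, aging_rate, steps):
    """tasks: dict of name -> base priority (higher runs first).
    Returns the order in which tasks were executed."""
    wait = {name: 0 for name in tasks}
    order = []
    for _ in range(steps):
        chosen = max(tasks, key=lambda n: tasks[n] + aging_rate * wait[n])
        order.append(chosen)
        for n in wait:          # everyone else waits one more step...
            wait[n] += 1
        wait[chosen] = 0        # ...but the chosen task's wait resets
    return order

tasks = {"urgent-email": 10, "run-backup": 1}
# Without aging, the backup starves: it never runs at all.
no_aging = schedule(tasks, aging_rate=0, steps=20)
# With aging, its accumulated urgency eventually exceeds the urgent work.
with_aging = schedule(tasks, aging_rate=1, steps=20)
assert "run-backup" not in no_aging
assert "run-backup" in with_aging
```

The periodic review habits discussed below play the same role as the aging term: they mechanically raise the salience of neglected tasks instead of trusting a busy mind to remember them.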

Finding New Socks “It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle—they are strictly limited in number, they require fresh horses, and must only be made at decisive moments…It is interesting to note how important for the development of science a modest-looking symbol may be.” Alfred North Whitehead, An Introduction to Mathematics (1911) Many of the best anti-bias mechanisms or ‘life hacks’ or ‘habits’ are about strategic application of our limited System II resources, often employing external systems to fight starvation. The simplest wake-up mechanism is a habit of occasionally reviewing the past, like reviewing one’s ledgers at the end of every month. The humble checklist, for example; or poka-yoke error-proofing, pointing and calling, reminder or note-taking software, spreadsheets/double-entry ledgers, emails with timers, ‘lint’ tools, many ‘life hacks’ in general… I am heavily reliant on my calendar software to remind me to check in on various papers or people, do exports/backups which can’t be easily automated, update pages, and re-evaluate things periodically; in writing, I have found it worthwhile to develop my own checklist, and am constantly expanding my writing linter, markdown-lint & my site build/sync script with new errors to watch out for.
Such systems efficiently intervene only at critical moments, and systematically cover the available options to overcome System I inertia/forgetting: a checklist reminds one of every necessary step; poka-yoke error-proofing removes error cases or at least adds them to checklists; pointing-and-calling is a physical implementation of the mental process of checklisting; while time-based tools like calendars can be scheduled in advance to fire only at the critical moment, saving all the cognition from now to then. And sufficiently reliable automated tools can go one better, waking up System II only if there is actually an error which needs to be fixed.
Exploration Underuse of System II particularly manifests as over-exploitation/under-exploration, where large potential improvements are foregone because of the lack of a habit or other systematic factor which would trigger exploration. (By exploration, I don’t mean spending hours reading reviews on Amazon or on social media, or reading yet another book on a topic, which is largely about feeding idle curiosity & is information super-stimuli, but actual experimentation and trying.) One way to measure under-exploration is noting instances where exogenous randomization or destruction of the status quo option leads to permanent changes or net efficiency gains after the shock is removed, indicating learning, or that the status quo was suboptimal all along. (One area where under-exploration is especially rife is randomized experiments in science, where what everyone ‘knows’ based on correlation often turns out to be false, yet despite the large implied regrets, it is still held to be ‘unethical’ to run more randomized experiments.) Harvard economist Sendhil Mullainathan asks “Why Trying New Things Is So Hard to Do”, putting it well with a familiar example, grocery shopping: I drink a lot of Diet Coke: two liters a day, almost six cans’ worth.
I’m not proud of the habit, but I really like the taste of Diet Coke. As a frugal economist, I’m well aware that switching to a generic brand would save me money, not just once but daily, for weeks and years to come. Yet I only drink Diet Coke. I’ve never even sampled generic soda. Why not? I’ve certainly thought about it. And I tell myself that the dollars involved are inconsequential, really, that I’m happy with what I’m already drinking and that I can afford to be passive about this little extravagance. Yet I’m clearly making an error, one that reveals a deeper decision-making bias whose cumulative cost is sizable: Like most people, I conduct relatively few experiments in my personal life, in both small and big things. This is a pity because experimentation can produce outsize rewards. For example, I wouldn’t be risking much by trying a generic soda, and if I liked it enough to switch, the payout could be big: All my future sodas would be cheaper. When the same choice is made over and over again, the downside of trying something different is limited and fixed—that one soda is unappealing—while the potential gains are disproportionately large. One study estimated that 47% of human behaviors are of this habitual variety. Yet many people persist in buying branded products even when equivalent generics are available. These choices are noteworthy for drugs, when generics and branded options are chemically equivalent. Why continue to buy a name-brand aspirin when the same chemical compound sits nearby at a cheaper price? Grocery shopping is a great example because it is something everyone does, often, which represents a substantial portion of personal budgets, with clear & unambiguous costs, where the difficulty of experimentation is so minimal that it feels weird to even call activities like ‘compare prices & try different foods’ by a term as fancy as “experimentation”, where the benefits of learning are large & can last decades. 
(Aldi isn’t going to suddenly become more expensive than Whole Foods, and the rank-ordering of prices remains relatively constant—that’s the whole point of having brands, after all). Yet, we still don’t. And the benefits are large. As Mullainathan notes, while the cost in a single instance may be small, the total loss (“regret”) is much larger because it is repeated across a lifetime. If you choose to drink Diet Coke and it costs +$0.25/can (let’s say the generic costs $0.75/can and Diet Coke $1/can, and if you dislike the generic you’ll throw it away), you haven’t lost $0.25, you have lost much more than that, because it is not a one-off decision about a single drink—you are buying information for all your future choices, and the “Value of Information” of the experiment is far higher than the trivial upfront cost. Suppose you drink 1 Coke a day. The difference is $0.25/day, or $91 a year. The gain from switching does not stop after a year, it goes on indefinitely, so at a fairly psychologically normal discount rate of 5%, the NPV of the gain is $1871. In order for your experiment to fail to cover its $0.75 cost and not be profitable, you would have to assign a prior probability of <0.013% to the generic being as good (or better!) and you switching and reaping a gain of $1871. Which would be crazy, because as Mullainathan also notes, everyone knows that often the generic version is fine, and indeed frequently is literally the same as the brand name, either because they use the same manufacturers or because the seller is implementing price discrimination. And let’s not pretend that this is any great heroic effort, requiring advanced statistics or long-term experimentation or blinding. It takes a second to grab the generic soda from the shelf next to the Diet Coke, and a few seconds later in the kitchen to try them side by side; are they about the same? Then great!
You can enjoy the savings from buying generic thenceforth; otherwise, toss the generic soda; either way, there’s no need to think about it further. This applies as well to any other staples you might buy. Is King Arthur flour really worth paying twice as much as Gold Medal flour? (Not that I’ve ever noticed in my baking.) Perhaps if you tried all 6 kinds of applesauce you’d find one of the cheaper ones tastes better than the expensive ones. (I did. It doesn’t add sweetener, and I think most applesauces are oversweetened. I want it to taste like apples, not corn syrup.) Is ‘scrap bacon’ terrible in some way that makes its costing half as much as regular bacon a lie? (Nope: tastes as delicious to me, and I can buy twice as much.) Can you tell the difference between the expensive imported Finnish/Irish butter and the generic Walmart butter? (I can… eating it straight while concentrating carefully. But I can’t on bread or anywhere I would actually use said butter.) And is Smucker’s “natural peanut butter” any better than your ordinary Jif or generic peanut butter? (Trick question—I actually think it tastes much better than regular peanut butter & that’s what I buy. But I only know this because I tried them all; otherwise, I wouldn’t’ve bought something as weird-looking as peanut butter which still has its original peanut oil.) Personally, I make a point, whenever trying something new like a food, of buying 1 of everything, to the extent possible, and simply trying them all. I am no longer surprised when I find that the generic is as good or better at a third or less the cost (how on earth do brands maintain their profits when it’s so easy to compare?), or that I prefer something I didn’t expect to prefer. (Particularly in tea this has paid off, in learning about strange things like twig tea.)
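The Diet Coke arithmetic earlier can be checked in a few lines. The exact discounting convention is an assumption on my part (the essay states only a 5% rate); treating the $0.25/day saving as a perpetuity discounted daily reproduces the ~$1871 figure:

```python
# Reconstruction of the value-of-information arithmetic: a $0.25/day
# saving treated as a perpetuity, discounted daily at a 5% annual rate.
# (The daily-compounding convention is an assumption, chosen because it
# matches the essay's ~$1871 figure.)

daily_saving = 0.25                           # brand $1.00/can vs generic $0.75/can
annual_rate = 0.05                            # "psychologically normal" discount rate
daily_rate = (1 + annual_rate) ** (1 / 365) - 1

# NPV of saving $0.25 every day forever, discounted daily:
npv = daily_saving / daily_rate               # ~$1870, ie the essay's ~$1871

# Breakeven prior on "the generic is fine": below this probability the
# experiment doesn't pay. (Using a $0.25 experiment cost reproduces the
# essay's ~0.013% figure.)
breakeven_prior = daily_saving / npv          # ~0.000134, ie ~0.013%
```

The asymmetry is the whole point: the downside is one unpleasant can, while the upside compounds over every future purchase, so only an absurdly pessimistic prior can make the experiment not worth running.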
I think it’s crazy how people will buy the same thing forever and overspend on brand names, and, while they’re at it, never try another grocery store (switching to Walmart saved me >10%, and then switching to Aldi another >10%), and pass up bulk savings to buy the smallest possible quantities. And then they complain their monthly grocery bill is $400 and they wonder where all the money goes… It is wasteful to not be wasteful. If we so often under-explore in groceries, we surely under-explore elsewhere too. What can help ameliorate this is deliberately forcing exploration. With groceries, my rule of buying multiples the first time is a simple, easily-implemented heuristic to force exploration of grocery options. With music, I try to avoid my tastes ‘freezing’ into whatever I listened to as a teenager by listening to large musical dumps rather than recommendations (eg dōjinshi convention compilations), and avoiding the bandwagon effects of popular media. With research, systematic reading of all papers on a given topic rather than just the most-cited ones can lead to many interesting but still-obscure papers. We can try to compensate for our lack of mindfulness in other areas too. With socks, my new heuristic is to expand my annual photographic inventory of my personal possessions (making a record of everything I own in case of disaster) to include clothes too; in considering my clothes, I expect that I will notice when I get low on socks—or any other kind of clothing—and can take action before too many years pass and my sockpile becomes inadequate. I will surely discover other inadequacies in the future, but, if I am mindful of my limits, fewer and fewer, and they will get less in the way of more important things.