The noise factors that we studied are (see panel a): 1. Pre-encoding noise (pre): sensory transduction noise arising before value inference is computed (for example, retinal noise). 2. Efficient-coding noise (EC): noise resulting from value inference via sampling. This noise component should be directly affected by the presentation time of the food images, which determines the number of effective samples (for example, from memory) that can be drawn and therefore the noisiness of the value representations. 3. Post-encoding noise (post): any form of downstream noise unrelated to value inference per se, for example, motor/muscle noise. 4. Lapse rate (lapse): quantifies the rate of random decisions due to distraction/lapses during performance of the valuation task. To run this analysis, we assumed that for ratings with long exposure times the respective source of noise was nearly zero, and evaluated how short exposure times affect the respective noise levels that potentially explain the observed ratings. The analysis shows that the models that best explain the data are those that incorporate efficient-coding noise (see Supplementary Table 1). We performed the factorial model comparison by computing the likelihood that the subjective value estimates observed under time pressure were generated by a given generative model (while appropriately penalizing for model complexity). We formally compared the different models via a log factor likelihood ratio (LFLR) approach that quantifies the degree of belief in each factor (Van Horn, 2003; Shen and Ma, 2018). In brief, we obtain the marginal likelihood that a factor F is present by marginalizing over all models M in the model space, \(L\left( {F_{{\mathrm{present}}}} \right) \approx \mathop {\sum}\limits_M {p\left( {{\mathrm{data}}|M} \right)p\left( {M|F_{{\mathrm{present}}}} \right)},\) while assuming that all models are a priori equally probable.
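The marginalization step can be sketched as follows. All log marginal likelihoods below are made-up illustrative values for a 2^4 factorial model space (one model per subset of the four noise factors); in the actual analysis they come from fitted models.

```python
import itertools
import math

FACTORS = ("pre", "EC", "post", "lapse")

# Toy log marginal likelihoods, one per subset of noise factors.
# Purely for illustration, models that include EC noise fit better.
log_ml = {}
for r in range(len(FACTORS) + 1):
    for combo in itertools.combinations(FACTORS, r):
        log_ml[frozenset(combo)] = -120.0 + (25.0 if "EC" in combo else 0.0)

def log_marginal(factor, present=True):
    """log L(F) = log sum_M p(data|M) p(M|F), with a uniform prior
    over the models in which the factor is present (or absent)."""
    terms = [ll for m, ll in log_ml.items() if (factor in m) == present]
    top = max(terms)  # log-sum-exp trick for numerical stability
    total = top + math.log(sum(math.exp(t - top) for t in terms))
    return total - math.log(len(terms))  # uniform prior p(M|F) = 1/|models|
```

With this toy model space, `log_marginal("EC", True) - log_marginal("EC", False)` recovers an LFLR of 25 for the EC factor, while the same difference for the other factors is 0.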
One can analogously compute the marginal likelihood of the factor’s absence and then obtain the LFLR via \({\mathrm{LFLR}}_{{\mathrm{AIC/BIC}}}(F) \equiv \log \frac{{p({\mathrm{data}}|F_{{\mathrm{present}}})}}{{p({\mathrm{data}}|F_{{\mathrm{absent}}})}}.\) We approximated the marginal log likelihood of a given model by −0.5 times the AIC or BIC of that model, and conducted this analysis independently for each participant. Panel b shows the LFLR results for all noise factors averaged across participants (n = 24), using both AIC (left) and BIC (right) to estimate the likelihood of the data given the fitted parameters. The results clearly indicate that the internal noise of the efficient-coding (EC) model is the only factor that significantly explains the data. Horizontal dashed lines represent the levels of evidence for a given LFLR. The average LFLRs of the EC factor are >9.6, which corresponds to a Bayes factor BF > 100; this provides overwhelming evidence for the factor being relevant (Jeffreys, 1961). No other factor crosses the moderate-evidence line. Error bars in this panel indicate s.e.m. Panel c shows the LFLRs of the EC factor for each participant; this analysis reveals a positive LFLR for the large majority (21 out of 24) of our participants. These results provide compelling evidence that manipulating exposure time (when valuating food items) affects internal noise via efficient coding.
Van Horn, K.S. (2003). Constructing a logic of plausible inference: a guide to Cox’s theorem. Int. J. Approx. Reason. 34, 3–24.
Jeffreys, H. (1961). Theory of Probability (Oxford: Clarendon Press).
Shen, S., and Ma, W.J. (2018). Variable precision in visual perception. Preprint at bioRxiv 153650.
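As a toy illustration of the information-criterion approximation described above, the following sketch computes the LFLR for one matched pair of models from hypothetical AIC values (all numbers are assumptions, not fitted results):

```python
import math

# Hypothetical AIC values for a matched pair of models, one including the
# factor of interest and one excluding it (illustrative numbers only).
aic_factor_present = 190.0
aic_factor_absent = 240.0

# Approximate each model's log marginal likelihood as -0.5 * AIC, then take
# the log likelihood ratio to obtain the LFLR for this pair.
log_ml_present = -0.5 * aic_factor_present
log_ml_absent = -0.5 * aic_factor_absent
lflr = log_ml_present - log_ml_absent  # > 0 favors the factor's presence

# Exponentiating the LFLR gives the corresponding Bayes factor, which can be
# read against a conventional evidence scale (Jeffreys, 1961).
bayes_factor = math.exp(lflr)
```

With these illustrative AICs, the LFLR is 25 and the implied Bayes factor far exceeds 100.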