Below are the major problems with the 2020 process. Now that the final public meetings of the Dietary Guidelines Advisory Committee (DGAC) have concluded, the public will receive no additional information until the committee submits its draft report, due May 11.

The 2020 DGA Process Relies on Flawed, Weak Science

Here’s what we’ve learned: Despite the NASEM recommendations and despite thousands of public comments on key issues, the DGA process is still:

non-transparent in important ways,

excluding vast quantities of rigorous science on key topics,

using outdated science,

lacking any up-to-date method for reviewing the science.

Primary problems with the 2020 DGA process

At least a dozen USDA reviews are based on outdated science

By law, the DGA must reflect the “scientific and medical knowledge which is current at the time the report is prepared,” yet the 2020 process does not meet that standard. At the DGAC meeting in January, committee member Kathryn Dewey stated that the 13 reviews undertaken for the “B-24” (birth-through-24-months) population had searched the scientific literature systematically only through 2016. For studies published after 2016, Dewey said the committee had done “an informal search to identify new evidence that has emerged since 2016” but “did not locate any studies that would have changed [their] conclusions.” Dewey did seem concerned, however, that these 13 reviews might have missed some of the science from the past few years. She said,

“We would like to ask the public to please submit public comments if you know of any articles published since 2016 that meet the inclusion criteria and would also significantly affect these conclusions…we do appreciate any comment that the public would like to provide.”

It is, of course, completely unscientific to rely on random submissions from the public to identify relevant studies. Thus, the DGA’s B-24 reviews are already out of date before they are even published.

Committee Relying on 2015 Reviews Deemed “Unsystematic” by the National Academies of Sciences, Engineering, and Medicine

For a number of reviews, the committee used as its starting point the 2015 DGA reviews—even though NASEM concluded that this earlier work, from the 2015 and 2010 processes, was “non-systematic” and therefore not reliable.[1] The 2020 reviews drawing on this earlier work include, at a minimum, the reviews on saturated fats and those on the Dietary Patterns. Given the basic flaws in this foundational evidence, the 2020 reviews therefore cannot be considered trustworthy.

Committee Issues “Strong” Recommendations Based on Weak Science

In at least one instance, the DGAC issued a “strong” recommendation based exclusively on a weak type of science, called epidemiology. The question was the following:

“What is the relationship between dietary patterns consumed and all-cause mortality?”

The committee considered the evidence to be “strong” that the USDA dietary patterns (Mediterranean, “US-Style,” and Vegetarian) could reduce all-cause mortality, despite the fact that not a single experiment (clinical trial) was cited to support this claim. Instead, the committee cited exclusively observational—or epidemiological—studies, which can show only associations. Epidemiological data are useful for generating hypotheses but, in the field of nutrition, have never reliably been able to ‘prove’ causality. The leap to assuming causation can be made only rarely, when certain standards, called the “Bradford Hill criteria,” are satisfied.[2] The first criterion is the size of the effect, or “strength of association.” Indeed, Dr. Hill stated that this criterion was “First upon my list,” since it was the most important.

When the strength of association is large, such as the 20- to 30-times greater risk of dying from lung cancer seen among heavy smokers compared to never-smokers, causality can be considered, provided the rest of the Bradford Hill criteria are also satisfied. In nutritional epidemiology, however, this effect size rarely exceeds 2. Such a small effect cannot be considered reliable, due to the imprecision of food-frequency questionnaires and residual confounding, among other problems.

By contrast, DGAC committee members found this weak evidence convincing because it was so “consistent.” However, “consistency” is only Bradford Hill’s second criterion. Without strong effects, consistency is not enough to assume causation. Indeed, this kind of consistency could simply reflect bias in the field—so many governments worldwide have followed the U.S. in adopting the same type of diet, emphasizing fruits, vegetables, whole grains, lean meat, low-fat dairy, nuts, and seeds, and researchers who rely on government grants have every incentive to find in favor of the diet promoted by their funders.

Moreover, committee members noted numerous inconsistencies among the 152 epidemiological studies they were lumping together, including between four and fifteen different methods for analyzing the data (0:22:58 – 0:25:59) and “different definitions of food and beverages” among the studies. Analyses have found that such dramatic heterogeneity in how a dietary pattern is defined ultimately means that these studies cannot be combined with “any degree of reliability.”

Findings from nutritional epidemiology have a track record of being incorrect: when properly tested in clinical trials, they are confirmed only 0-20% of the time. These are, of course, very low odds on which to bet the public health.