As President Joe Biden signs the largest climate package in American history, some leftists are bothered by policymakers’ hesitance to create a tax on meat.
Democratic lawmakers have long claimed that excrement produced by livestock and poultry farming — “farting cows,” as Rep. Alexandria Ocasio-Cortez (D-NY) once put it — has an outsized impact on global temperatures. Discussion about crafting new disincentives for meat consumption resurfaced as the Inflation Reduction Act advanced through Congress after months of gridlock.
Vox reporter Kenny Torrella, for instance, noted that the Inflation Reduction Act devotes 5% of its funds toward “changing farming practices” while entirely ignoring meat and dairy production — the sector’s “biggest climate culprit.”
“Even though the money to cut emissions from agriculture is misplaced, the strategy — hand out money to do the right thing rather than penalize polluters for doing the wrong thing — is politically smart, and in keeping with the bill’s carrot rather than stick approach to energy,” Torrella wrote. “Just like the environmental movement had for decades, the effort to shift our meat-centric food system to a more plant-based one has historically focused on the stick approach: suing farms for pollution, banning the cage confinement of hens and pigs, and even floating the concept of a tax on meat consumption.”
Biden was able to advance many of his policy goals through the $740 billion law, which includes $369 billion in climate spending. The package likewise incorporates billions in new taxes that the federal government will collect from businesses and the middle class through an Internal Revenue Service roughly doubled in size.
Though Torrella admitted that “handing out carrots in Congress might be a more politically effective path to reforming the factory farming industry, the emissions that it spews, and the suffering it creates,” he personally supports a meat tax, claiming that the approach “has a lot of merit” and could “build public support for a life and death issue that is too often ignored.”

And at WUWT, Why Red Meat Negative Health Claims are False:
How large could such a tax be? According to researchers Cameron Hepburn and Franziska Funke, as high as 56% for beef, 25% for poultry, and 19% for lamb and pork — costs that are necessary to “reflect the environmental costs of their production.”
Introduction
Kip Hansen’s recent WUWT article was dead-on about the nonsense behind the claim that meat is a problem for climate change. The World Economic Forum (WEF), assisted by academics, wants you to believe that meat is unhealthy compared to soy, tofu, insect, and fungus protein diets.
The WEF asserts that in the future “…meat will be a special treat, not a staple for the good of the environment and our health.” Academics claim that eating red meat increases mortality and causes numerous types of cancer (colorectal, breast), Type 2 diabetes, and the list goes on. Does this make sense?
There is a saying… math is hard. Well, as will be shown, statistics appears to be even harder for academic food researchers. A look inside the statistical workings of food research (nutritional epidemiology) shows this and addresses the doubtful claims that red meat harms health.
Background
Many food claims – beneficial or harmful – are made based on observational studies of large groups of people called cohorts. Cohort members are given a food frequency questionnaire (FFQ), which asks about the types and portion sizes of foods they consume. Years later, food researchers ask the participants about their health conditions.
They then perform statistical analysis of food−disease associations with the data collected. Surprising food−disease associations end up as published research claims. But are these claims true?
Unhealthy red meat claims merit special attention given the WEF’s fixation on it. Kip Hansen’s WUWT article pointed out an evaluation of red meat FFQ studies completed by the Bradley Johnston research group in 2019. It was an international collaboration examining red meat consumption and 30 different health outcomes.
The Johnston research group reviewed published literature, selected 105 FFQ studies, analyzed them and presented their findings in the journal Annals of Internal Medicine. They took a position opposite to the WEF – studies implicating red meat were unreliable. Their findings created a firestorm among food researchers, who are mostly academics. More about that later.
Analysis
Statistically confirming the same claim in another study is a cornerstone of science. This is called replication. Given the potential importance of the Johnston study, it was recently independently evaluated in a National Association of Scholars report.
In the report, 15 of the 105 FFQ studies were randomly selected, and specific details were counted in each: the number of food categories, the number of health outcomes, and the number of adjustment factors.
Food researchers use various techniques to manipulate the FFQ data they collect. Researcher flexibility allows food categories from FFQs to be analyzed and presented in several ways: individual foods, food groups, nutrient indexes, or food-group-specific nutrient indexes. The 15 studies used anywhere from 3 to 51 food categories (median of 15).
The number of health outcomes ranged from just 1 to 32 (median of 3) in the 15 studies. Adjustment factors can modify a food−disease association. Nutrition researchers almost always include these factors in their analysis. These factors ranged from 3 to 17 (median of 9) in the 15 studies.
With these counts, the analysis search space can be estimated. This is the number of possible food−disease associations tested in an FFQ study. It is estimated as the number of food categories × the number of health outcomes × 2 raised to the power of the number of adjustment factors.
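A minimal sketch of this estimate, using the median counts reported above for the 15 sampled studies (15 food categories, 3 health outcomes, 9 adjustment factors); the function name is mine, not from the report:

```python
def analysis_search_space(food_categories: int,
                          health_outcomes: int,
                          adjustment_factors: int) -> int:
    """Estimated number of possible food-disease associations a study could test."""
    # Each adjustment factor can be included or excluded, hence 2 ** adjustment_factors.
    return food_categories * health_outcomes * 2 ** adjustment_factors

# Median counts from the 15 sampled FFQ studies
print(analysis_search_space(15, 3, 9))  # 23,040
```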
The typical (median) analysis search space estimated in the 15 studies was over 20,000. A large analysis search space means many possible associations can be tested. Food researchers can then search through their results and report only the surprising ones, which, as we now show, are also most likely false.
Now the elephant in the room… many of these types of analyses are likely performed by researchers with an inadequate understanding of statistical methods.
A p-value is a number between 0 and 1 calculated from a statistical test. It is the probability of finding a result at least as surprising as the one observed when chance alone is at work. The smaller the number, the greater the surprise.
The normal threshold for statistical significance for most science disciplines is a p-value of less than 0.05. Researchers can claim a surprising result if the p-value in a statistical test is less than 0.05.
However, with a threshold of 0.05, a false (chance) finding can be expected about 5% of the time for each test performed when there is no real effect. Five percent of 20,000 possible associations tested is 1,000 false findings that can be mistaken for true results in a single study.
The practice of performing many, many tests on a data set is called multiple testing. Say 20,000 associations are tested on a red meat FFQ study data set. Normally only several dozen results from all these tests would eventually be presented in a published study.
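A small simulation, not drawn from any of the studies discussed, illustrates the point: test 20,000 associations where no real relationship exists by construction and count how many clear the usual p < 0.05 bar purely by chance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_subjects = 20_000, 500

false_positives = 0
for _ in range(n_tests):
    exposure = rng.normal(size=n_subjects)  # e.g., reported servings of some food
    outcome = rng.normal(size=n_subjects)   # e.g., a health measure, unrelated by construction
    _, p_value = stats.pearsonr(exposure, outcome)
    if p_value < 0.05:
        false_positives += 1

print(false_positives)  # roughly 1,000 (about 5% of 20,000), every one of them spurious
```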
Of course, some of the presented results would be surprising, for example, a wild claim that red meat may lead to complications associated with erectile dysfunction. Otherwise, the study might not be accepted for publication.
Given these many tests with 1,000 possible false findings and only several dozen results presented, how does one tell whether a result claiming red meat leads to erectile dysfunction complications is true or just a false finding?
Without having access to the original data set to check or confirm a claim, you can’t! The Johnston research group was right to call out red meat FFQ studies as unreliable.
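For perspective, some rough arithmetic (mine, not the report’s) on what a standard multiple-testing guard such as the Bonferroni correction would demand of a study testing roughly 20,000 associations:

```python
alpha = 0.05       # the usual significance threshold
n_tests = 20_000   # approximate analysis search space of a typical FFQ study

# Bonferroni-corrected threshold: each individual test must clear a far stricter bar.
print(alpha / n_tests)   # 2.5e-06, versus the 0.05 actually used
# Expected chance findings at p < 0.05 if no correction is applied and no real effects exist.
print(alpha * n_tests)   # 1000.0
```

The complaint here, in effect, is that FFQ results are reported against the 0.05 bar as if only one test had been run.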
Cue the firestorm. Nutrition thought leaders – from Harvard – badgered the editor of Annals of Internal Medicine to withdraw Johnston’s paper before it even appeared in print. The editor held firm. The food research mob did not prevail.
Implications
Too many nutrition thought leaders, mostly academics, take the position that multiple testing is not a problem in food research. They teach that it is not a problem. They are wrong; it is a big problem.
No problem for them, but massive disinformation problems for everyone else when false findings are claimed as true results. John Ioannidis from Stanford and others have called out multiple testing as one of the greatest contributors to false published research claims.
FFQ studies using multiple testing and claiming red meat is unhealthy are largely academic exercises in statistical flimflamming. Red meat is not unhealthy. What is unhealthy is belief in the deceptive statistical practices and false claims of academic food researchers.
Over 50,000 food−disease studies have been published since the FFQ was introduced in the mid-1980s. Essentially all of these studies involve multiple testing, and their claims are very likely false.
The problem of multiple testing is one of the first things impressed on students in statistics. It is also one of the first things either forgotten or ignored in the rush to publish "hot" new results.
The Wombat has Rule 5 Sunday: IJN Musashi out on time and under budget.