Wednesday, January 22, 2014

Putting "Relative Risk" in perspective - Eades

Absolute risk versus relative risk: Why you need to know the difference


Six years ago, the airwaves were alive with Lipitor ads. Lipitor, a statin and the largest selling drug in the world at that time, was being challenged by other less expensive statins that had gone off patent, so Pfizer, the manufacturer, was blanketing the media in an effort to keep sales humming.

Pfizer had a series of ads featuring Dr. Robert Jarvik, one of the developers of the first artificial heart. These ads were a first: a real doctor had never before appeared in an ad touting a drug. As I posted at the time, Pfizer and Dr. Jarvik got into some trouble because both played a little fast and loose with the truth about Dr. Jarvik’s credentials.

[Image: Jarvik Lipitor ad]

Aside from the misrepresentation by both Pfizer and Jarvik, one of the claims many of these ads made was that by taking Lipitor you could reduce your risk of heart disease by 36 percent. Sounds pretty good. I would certainly like to reduce my risk of heart disease by 36 percent, and so, presumably, would the millions of people who went on the drug.

These ads weren’t technically misleading, but they didn’t tell the whole truth, because the 36 percent reduction in heart attack risk was what’s called a relative risk reduction.

Relative Risk
Relative risk is always stated as a percentage. Let’s say we do a study in which we randomize 200 subjects into two groups of 100. One group (the study group) takes a drug, and the other (the control group) takes a sugar pill. We keep the subjects in the two groups on their pill regimen and wait for, say, ten years to see what happens. After ten years, we find that 90 people in the group taking the sugar pill died while only 60 people taking the drug died.

We can then do the following calculation: 90 − 60 = 30. 30 divided by 90 = 0.33. Converting 0.33 to a percentage gives 33 percent, which is the relative risk reduction. If this were an actual study, you could say people taking the drug reduced their risk of dying by 33 percent.
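For anyone who wants to check the arithmetic, here is a minimal sketch in Python of the same calculation, using the made-up numbers from the example above:

    control_deaths = 90   # deaths in the sugar-pill (control) group of 100
    drug_deaths = 60      # deaths in the drug (study) group of 100
    group_size = 100

    control_risk = control_deaths / group_size   # 0.90
    drug_risk = drug_deaths / group_size         # 0.60

    # Relative risk reduction: the drop in risk expressed relative to the control risk
    rrr = (control_risk - drug_risk) / control_risk
    print(f"Relative risk reduction: {rrr:.0%}")  # prints 33%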

Would this be important? Absolutely. In this case at least. Why? Because 150 people out of 200 died. This means most of the people in the study died, so a 33 percent reduction in risk is huge. I would be all over this drug in a heartbeat.

So what’s the problem, then, with the 36 percent reduction in risk found in the Lipitor study? And why shouldn’t people be begging to go on Lipitor?

Because they need to know the absolute risk before the relative risk becomes important.

Absolute Risk
The absolute risk is simply the total risk for whatever is being studied. In our made-up example above, the study endpoint we were looking for was death. In that example, 150 out of 200 subjects died. So if you were a subject in that study, your absolute risk of dying would be 150 divided by 200, or 75 percent, which is very high.
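Again as a quick sketch with the same made-up numbers, the absolute risk is just events divided by subjects:

    total_deaths = 90 + 60        # deaths across both groups
    total_subjects = 200
    absolute_risk = total_deaths / total_subjects
    print(f"Absolute risk of dying: {absolute_risk:.0%}")  # prints 75%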

If your absolute risk of crashing during a commercial airline flight were 75 percent, you would never fly. But if you absolutely had to fly, and one airline offered a 33 percent reduction in risk of crashing (relative risk), you would be stupid not to fly that airline.

To make sense out of relative risk, you’ve got to know the absolute risk.

Let’s go back to our Lipitor claim.

The 36 percent relative risk reduction figure came from a study published in Drugs titled Prevention of coronary and stroke events with atorvastatin in hypertensive patients who have average or lower-than-average cholesterol concentrations, in the Anglo-Scandinavian Cardiac Outcomes Trial–Lipid Lowering Arm (ASCOT-LLA): a multicentre randomised controlled trial.

This study was a subset of a larger study in which about 20,000 subjects with high blood pressure and three other cardiovascular risk factors were randomized into two groups, each group getting one of two blood-pressure-lowering medications. From this larger group, a little over 10,000 subjects were found who had cholesterol levels at or below 250 mg/dl (6.5 mmol/l). In addition to the blood pressure medication, half of this group got 10 mg of Lipitor (atorvastatin) per day while the other half got a placebo.

In the Lipitor/placebo arm of this study, the endpoint was defined as a non-fatal heart attack or death from heart disease.

After about 3.3 years, this arm of the study was discontinued because there were significantly more heart attacks and deaths from heart disease in the placebo group, and the researchers felt it was unethical to continue the study.

When the data were examined, it turned out that the group taking Lipitor experienced a 36 percent decrease in relative risk for heart disease. Thus the barrage of ads for Lipitor that followed.
Even websites aimed at doctors, using info from the Lipitor package insert, showed graphics designed to make any doctor grab for the Lipitor pre-printed prescription pad. This graph is 100 percent accurate, but, as we shall see, hugely misleading.

[Image: Lipitor vs placebo graph]
Figure 1 below is a graphic showing what that 36 percent relative risk looks like.
[Image: Control vs Lipitor, 36 percent relative risk]
Figure 1. Relative risk of 36 percent with a large absolute risk.
 
Pretty impressive, isn’t it? Makes you wonder why anyone wouldn’t want to take Lipitor. Problem is, Figure 1, which I made up, shows a relative risk differential of 36 percent, but it also shows a large absolute risk.

Looking at Figure 2 below, which shows the actual absolute risk of experiencing a heart attack or dying from heart disease in this study, you can see that the 36 percent relative risk reduction is accurate. But does it really make you want to stampede to the pharmacy to pick up your Lipitor? Remember, these subjects all had high blood pressure and three other risk factors for heart disease, yet their absolute risk is pretty low. Would you want to take a medicine that could give you muscle aches and pains along with muscle wasting, short-term memory loss and possibly fatal liver or kidney damage based on the absolute risk shown in Figure 2?

[Image: Lipitor vs control, actual absolute risk of heart attack or death from heart disease]
Figure 2.  Relative risk of 36 percent with actual absolute risk in the Lipitor study. 
 
If you compare Figure 1 to Figure 2 above, both of which have the same relative risk, you can readily see that the absolute risk is extremely important. If the absolute risk is high, as it is in Figure 1 at the top, then the relative risk becomes important.

If, as in Figure 2 (the real Lipitor vs placebo graphic), the absolute risk is small, the relative risk matters much less. Both Figure 1 and Figure 2 show the same relative risk, but nowhere near the same absolute risk. Which is why you always want to know the absolute risk before you make a decision on anything based on relative risk. Because the absolute risk is usually pretty low in drug studies, the pharmaceutical industry typically uses the relative risk number to sell its medicines.
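To make the Lipitor case concrete, here is a small sketch using purely illustrative numbers (the actual ASCOT-LLA event rates are in the study and the figures above, not in this snippet). It shows how a 36 percent relative risk reduction applied to a small absolute risk translates into a small absolute benefit and a large number needed to treat.

    # Illustrative assumption only, NOT the exact ASCOT-LLA figure:
    # suppose a 3% absolute risk of a coronary event in the placebo group over the trial.
    placebo_risk = 0.03
    relative_risk_reduction = 0.36          # the advertised 36 percent

    treated_risk = placebo_risk * (1 - relative_risk_reduction)
    absolute_risk_reduction = placebo_risk - treated_risk
    nnt = 1 / absolute_risk_reduction       # number needed to treat to prevent one event

    print(f"Absolute risk reduction: {absolute_risk_reduction:.2%}")  # about 1.08%
    print(f"Number needed to treat:  {nnt:.0f}")                      # about 93 people

In other words, under that assumed baseline risk, roughly 93 people would need to take the drug for the whole trial period for one of them to avoid an event.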

In the case of statins, most people go on statins because what they really fear is dropping dead of a heart attack, and they feel the statins are protective.

What happened in this study? There were two more deaths in the placebo group than in the Lipitor group, but that difference wasn’t statistically significant. Figure 3 below is a graphic showing the absolute risk of having a fatal heart attack in the Lipitor trial.

[Image: Control vs Lipitor, fatal heart attack rates]
Figure 3. Relative and absolute risk of fatal heart attack in the Lipitor study.
 
Not a huge absolute risk.

Let’s look at another example that is much more dramatic than the Lipitor study above.

I pulled a paper from the New England Journal of Medicine (NEJM) looking at coronary artery calcification (CAC) scores and risk for having a heart attack.

Approximately 20 percent of the plaque in coronary arteries is composed of calcium, which shows up on X-rays. But since the heart is constantly in motion, it’s difficult to see the calcium in standard X-rays of the chest. Specialized CT machines, however, can take extremely fast photos of the heart and actually see the calcium in the coronary arteries. A computer program then converts this calcium into a score, which is simply a number. If you have a zero score, you have no calcium accumulation, which means you probably don’t have any plaque to speak of. You could have some soft plaque that hasn’t yet calcified, but a zero CAC score is definitely a good one.

The NEJM study looked at CAC scores of 6,722 subjects and broke them into four groups: zero, CAC score of 1-100, CAC score of 101-300, and CAC score of greater than 300. These groups were followed for about 4 years on average for the main endpoint of the study, which was a major coronary event, defined as a heart attack or death from coronary heart disease.
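As a trivial sketch of how that grouping works (the cutoffs are the ones just described; the function name is mine):

    def cac_group(score):
        # Bucket a coronary artery calcium (CAC) score into the four study groups.
        if score == 0:
            return "zero"
        if score <= 100:
            return "1-100"
        if score <= 300:
            return "101-300"
        return ">300"

    print(cac_group(0), cac_group(57), cac_group(250), cac_group(812))  # zero 1-100 101-300 >300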

You can see the relative and absolute risk in the graphics below. Upper left is zero CAC score. Upper right, 1-100 CAC score. The lower left is 101-300 CAC score, and the lower right is greater than a 300 CAC score.

[Image: Figure 4, four panels of relative and absolute risk by CAC score]
Figure 4. Relative and absolute risks for various CAC scores. Zero upper left. 1-100 CAC upper right. 101-300 CAC lower left. Greater than 300 lower right.
 
As you can see, the risk for a major coronary event is negligible with a CAC score of zero. And the risk increases as the CAC scores go up. The subjects in this study who had CAC scores above 300 experienced 19 times more major coronary events than did those with zero CAC scores. Which would mean a 300+ CAC score carries a relative risk of 1900 percent* as compared to a zero, which is humongous. But looking at the absolute risk tells a different story.
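Here is the same point as a sketch, again with purely illustrative event rates (the real per-group rates are in the NEJM paper and in Figure 4, not reproduced here):

    # Illustrative assumption only: suppose over ~4 years the event rate is
    # 0.4% with a zero CAC score and 7.6% with a CAC score above 300.
    risk_zero_cac = 0.004
    risk_high_cac = 0.076

    relative_risk = risk_high_cac / risk_zero_cac
    print(f"Relative risk: {relative_risk:.0f}-fold ({relative_risk:.0%})")  # 19-fold (1900%)
    print(f"Absolute risk with CAC > 300: {risk_high_cac:.1%}")              # still only 7.6%

A nineteen-fold relative risk sounds terrifying, but on these assumed numbers the absolute risk over the follow-up period is still well under one in ten.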

I’ve seen patients with 300+ CAC scores come unglued. Granted, it’s not a score you want to see, but it doesn’t mean you’re going to die the next day. A lot of people feel that way, though. Which is why knowing the absolute risk is important before you decompensate over a high relative risk for this or any condition.

If you would like to play around with these absolute and relative risk graphics, you can do so at this site.  Keep the site bookmarked, so the next time you come across a relative risk, look up the actual risk and graph it.  Only then will you know the true risk involved.

_____________________________________________________________________________________________
*I used a relative risk score based on the raw data from the study.  If you look at the actual study (linked above), you see different relative risks than what I have.  The ones in the study are lower because the authors extrapolate the data out longer and take into account when the events happened.  So, if you happen to have a high CAC score, your risk is lower than what I show above, which was strictly for illustrative purposes.
==================================================================
Read the complete article here.

===================
===================
20140721: Added this link to a good video on relative risk. Well worth the watch.

Friday, January 17, 2014

Ascorbic Acid to Coronary Artery Calcium

Relation of Ascorbic Acid to Coronary Artery Calcium

The Coronary Artery Risk Development in Young Adults Study

Abstract

Ascorbic acid is an antioxidant nutrient possibly related to the development of atherosclerosis. To examine the relation between ascorbic acid and coronary artery calcium, an indicator of subclinical coronary disease, the authors analyzed data from 2,637 African-American and White men and women aged 18–30 years at baseline who were enrolled in the Coronary Artery Risk Development in Young Adults (CARDIA) Study (1985–2001). Participants completed diet histories at enrollment and year 7, and plasma ascorbic acid levels were obtained at year 10. Coronary artery computed tomography was performed at year 15. The authors calculated odds ratios in four biologically relevant plasma ascorbic acid categories, adjusting for possible confounding variables. When compared with men with high plasma ascorbic acid levels, men with low levels to marginally low levels had an increased prevalence of coronary artery calcium (multivariate odds ratio = 2.68, 95% confidence interval: 1.31, 5.48). Among women, the association was attenuated and nonsignificant (multivariate odds ratio = 1.50, 95% confidence interval: 0.58, 3.85). Ascorbic acid intakes from diet alone and diet plus supplements were not associated with coronary artery calcium. Low to marginally low plasma ascorbic acid levels were associated with a higher prevalence of coronary artery calcium among men but not among women.
.
.
.

DISCUSSION

Overall, our study produced mixed findings. The main positive finding was that low to marginally low plasma ascorbic acid levels (measured at year 10) were associated with an approximately threefold higher prevalence of coronary artery calcium (ascertained at year 15) among men independently of other cardiovascular disease risk factors, including smoking. We did not, however, observe a similar relation among women, perhaps because fewer women had coronary artery calcium, thereby limiting our statistical power to detect such an association. We also cannot exclude the possibility that our findings resulted from chance or residual confounding. The findings among men concur with findings from cross-sectional and longitudinal analyses of participants in the Second National Health and Nutrition Examination Survey (NHANES II) (7, 15). Simon et al. (15) reported previously that NHANES II participants with low to marginally low serum ascorbic acid levels had an increased prevalence of self-reported coronary heart disease. In the NHANES II Mortality Study, which followed participants for a mean of 14 years, Simon et al. (7) found a trend toward increased cardiovascular disease mortality among individuals with low to marginally low serum ascorbic acid levels. Similar to our current findings, those from the NHANES II Mortality Study did not reflect a relation between dietary intake of ascorbic acid and cardiovascular disease endpoints (7).
                 
Our current findings are also consistent with results from some observational studies that also reported low blood ascorbic acid levels to be a risk factor for coronary heart disease (3–5, 8, 16, 17). Not all observational studies have reported such an association (6, 18), and the few randomized trials examining the effect of vitamin C supplementation on coronary heart disease endpoints, typically in combination with other antioxidants, have produced inconsistent results, ranging from decreased risk to no effect to increased risk (19–22). Specifically determining whether marginal vitamin C deficiency is a factor in the development of atherosclerotic coronary disease would be of considerable public health importance, since blood levels consistent with marginal deficiency are prevalent in the population (7) and readily modifiable.
                 
Conclusions based on our findings are qualified by limitations in the study design. We collected information on plasma levels of ascorbic acid 5 years before the coronary artery calcium measurement, but we do not have coronary artery calcium scores before year 15. Therefore, we cannot be certain that differences in plasma ascorbic acid preceded the development of coronary artery calcium since we cannot exclude the possibility that subclinical coronary disease lowered plasma ascorbic acid levels. The concern about the direction of causality is underscored in part because we were unable to find an association between dietary ascorbic acid intake (measured at baseline and year 7) and coronary artery calcium; that is, since blood levels of ascorbic acid are generally correlated with intake, a similar association between lower ascorbic acid intakes and coronary artery calcium would have been expected.
                 
There are several potential explanations for these findings. Because the dietary assessments were performed at baseline and year 7 and plasma ascorbic acid levels were assayed at year 10, we cannot exclude the possibility that dietary and supplement use changed during the intervening period, although we did find a weak, albeit statistically significant, correlation between dietary ascorbic acid intake and plasma ascorbic acid levels (r = 0.14; p < 0.0001). It is also possible that the dietary assessments were not sufficiently accurate or precise to permit the detection of the association. Prospective studies that examined dietary intake of ascorbic acid as a predictor of cardiovascular disease have produced contradictory results. An analysis of data from the First National Health and Nutrition Examination Survey Epidemiologic Follow-up Study found that individuals with the highest intakes of ascorbic acid had 25–50 percent lower cardiovascular disease mortality (23). Dietary ascorbic acid intake was also associated with a lower risk of coronary heart disease death among Finnish women (24) and a group of 747 noninstitutionalized elderly Massachusetts residents (25). The Nurses’ Health Study (26), the Health Professionals Follow-up Study (27), and others (28, 29), however, found no significant association between ascorbic acid intake and risk of coronary heart disease.
                 
Ascorbic acid may reduce the risk of cardiovascular disease by a number of mechanisms. Antioxidant status has been hypothesized to be an important factor in atherogenesis, and ascorbic acid is a highly effective water-soluble antioxidant capable of inhibiting lipid peroxidation (30, 31). In some studies, ascorbic acid blood levels and dietary intake have been associated with increased levels of high density lipoprotein cholesterol and decreased levels of total cholesterol (1, 32, 33). The inverse relation between plasma ascorbic acid levels and coronary artery calcium that we observed, however, was independent of cholesterol levels. Ascorbic acid promotes endothelial prostacyclin production (34), improves endothelium-dependent vasodilation (2), and is essential for vascular collagen formation, all factors that may be associated with cardiovascular disease risk. Despite the potential for ascorbic acid to lower the risk for cardiovascular disease, recent clinical trials using antioxidant cocktails that contain ascorbic acid have failed to lower cardiovascular disease risk (19–22). We are unaware, however, of clinical trials using ascorbic acid supplementation specifically among individuals with low to marginally low blood levels, our postulated high-risk group.
                 
In addition to the limitations discussed, we were also limited by having only a single measurement of plasma ascorbic acid, which may not reflect long-term plasma concentrations optimally. However, plasma ascorbic acid levels reflect at least the previous several months of dietary intake, even during periods of seasonal variation (35), and are strongly correlated with leukocyte ascorbic acid levels, an indicator of tissue ascorbic acid levels (36, 37). We cannot exclude the possibility that our findings were affected by residual confounding (especially from smoking) or that plasma ascorbic acid levels were simply a healthy diet or lifestyle marker. The association of low plasma ascorbic acid levels with higher prevalence of coronary artery calcium among men was, however, independent of the effects of other lifestyle-related variables, such as education and exercise.
                 
In conclusion, we found that low to marginally low plasma ascorbic acid levels were independently associated with a higher prevalence of coronary artery calcium in young adult men but not in young adult women. Because we cannot exclude chance or residual confounding as an explanation of our findings, our results need to be confirmed by other investigators.
 

============================================================
Read the complete article here.

Monday, January 13, 2014

Does Wheat Cause Coronary Heart Disease?

Does Wheat Cause Coronary Heart Disease?

Introduction

Coronary heart disease (CHD) is the leading cause of death worldwide - killing 7 million people every year. In the following text, we will see that wheat consumption is probably a risk factor for CHD.

Conventional Wisdom on Wheat

Most health organizations currently view wheat as a safe food except for people with celiac disease - affecting up to 1% of the population - and people with non-celiac gluten sensitivity. Also, whole wheat - as part of whole-grains - is considered to be one of the healthiest foods. In fact a diet rich in whole-grains is considered to be protective against CHD.

Why? Because observational studies consistently find that whole-grain consumption is associated with a decreased risk of CHD. Do these results contradict the idea that wheat consumption causes CHD?

Are Whole-Grains Protective Against CHD?

According to this study:
Whole-grain intake consistently has been associated with improved cardiovascular disease outcomes, but also with healthy lifestyles, in large observational studies. Intervention studies that assess the effects of whole-grains on biomarkers for CHD have mixed results.
Indeed many studies show that whole-grain consumption is associated with a decreased risk of CHD. But these studies are observational and can only show correlation, not causation.

In fact there is a health-conscious population bias in these studies: for example, people consuming the most whole-grains also exercise more and smoke less:
[Chart: Whole-grain intake and lifestyle factors. Data from Majken K Jensen et al., Intakes of whole grains, bran, and germ and the risk of coronary heart disease in men, 2004]

Of course researchers adjust the data for these risk factors. But it is very difficult, maybe impossible, to adjust for all risk factors. For example, the two previously cited studies did not adjust for important risk factors like socioeconomic status or social support.

A classic example of this bias can be found in hormone replacement therapy (HRT): observational studies had found that HRT decreased the risk of heart disease, while a controlled study finally found that HRT actually slightly increased that risk.

Evidence that this health-conscious bias could explain the seemingly protective effect of whole-grains can be found in randomized controlled studies: many of them fail to find any beneficial effect of whole-grains compared to refined grains.

So according to these randomized controlled studies, whole-grains are neutral with respect to CHD risk. How then can we say that wheat causes CHD?

Are All Grains Created Equal?

Many randomized controlled studies have compared wheat with other grains. These trials are usually quite short, so instead of looking at the number of heart attacks, short-term studies focus on risk predictors of CHD like weight gain or markers of inflammation. Apolipoprotein B (ApoB) level is another risk factor: it represents the number of LDL particles - often called “bad cholesterol” - and is now considered a better predictor than LDL-C, the amount of cholesterol contained in LDL particles. The lower the ApoB level, the lower the risk of CHD.

Here are some results of these studies:
  • a study concluded that a bread diet may promote fat synthesis/accumulation compared with a rice diet
  • wheat increased BMI compared to flaxseed in a 12-month study
  • wheat increased ApoB levels by 5.4% compared to flaxseed in a 3-week study
  • wheat increased ApoB levels by 7.5% compared to flaxseed in a 3-month study
  • wheat increased ApoB levels by 0.05 g/L compared to flaxseed in a 12-month study
  • oat decreased ApoB levels by 13.7% while wheat had no significant effect in a 21-day study
  • wheat increased the number of LDL particles by 14% while oat decreased them by 5% in a 12-week study
  • the ApoA to ApoB ratio (a risk predictor similar in efficiency to ApoB alone - here the higher the better) increased by 4.7% for oat bran and 3.9% for rice bran compared to wheat bran in a 4-week study
These studies show that some grains like oat improve the risk factors for CHD compared to wheat. In addition, these studies often show an absolute improvement of the CHD risk profile in groups eating oat and an absolute deterioration in groups eating wheat. Although we cannot say for sure, this would suggest that oat is protective against CHD - which is confirmed by other studies - while wheat increases the risk of CHD.

That could help explain why people eating more whole-grains are healthier in observational studies, since it looks like they eat more grains like rice and oat and less typically wheat-based food like white bread, pasta and doughnuts:
[Chart: Whole-grain intake and different grain intake. Data from Andersson A. et al., Intakes of whole grains, bran, and germ and the risk of coronary heart disease in men, 2007]

Now let’s have a look at studies linking wheat and CHD.

Observational Studies on Wheat

Some observational studies have linked wheat to waist circumference gains - waist circumference being a strong predictor of CHD:
  • a study showed a correlation between consumption of white bread and waist circumference gains
  • a study concluded that “reducing white bread, but not whole-grain bread consumption, within a Mediterranean-style food pattern setting is associated with lower gains in weight and abdominal fat”
  • a Chinese study found that a “vegetable-rich food pattern was associated with higher risk of obesity”, but as noted by obesity researcher Stephan Guyenet the association with obesity is in fact stronger for wheat flour than for vegetables
A more pertinent result is found in the data of a large observational study in China. Researchers analysed these data and found a 0.67 correlation between wheat flour intake and CHD. They also found a 0.58 correlation between wheat intake and BMI.
[Chart: CHD mortality and wheat intake. From Denise Minger]
But this is just a single unadjusted correlation and does not prove much. However, blogger Denise Minger thoroughly analysed the data from this study and found that the association held strongly after multivariate analysis with every other available variable, like latitude, BMI, smoking habits, fish consumption, etc.

Since it is an observational study it cannot prove anything, but it is yet another piece of evidence suggesting that wheat consumption causes CHD. Let’s now have a look at randomized controlled trials.

Randomized Controlled Trials on Wheat

Beyond the randomized controlled trials comparing wheat with other grains, there are two additional studies suggesting that wheat consumption causes CHD.

The first one is a study involving rabbits. While studies involving animals are not always relevant to humans - especially studies with herbivores like rabbits - the results of this study are quite interesting.

The researchers fed rabbits an atherogenic diet (i.e. one promoting the formation of fatty deposits in arterial walls) with a supplement of cottonseed oil, hydrogenated cottonseed oil, wheat germ or sucrose. As they conclude:
Severity of atherosclerosis after 5 months was greatest on the wheat germ-supplemented diet, whereas there were no differences among the other three groups.
The second study is the Diet And Reinfarction Trial (DART). In this 2-year randomized controlled trial, people who had already recovered from a heart attack were split into groups receiving various kinds of advice. The main result of this study was that the group advised to eat fatty fish had a reduction in mortality from CHD.

One other piece of advice - the fibre advice - was:
to eat at least six slices of wholemeal bread per day, or an equivalent amount of cereal fibre from a mixture of wholemeal bread, high-fibre breakfast cereals and wheat bran
Given this advice, we can guess that most of the cereal fibre intake in this group came from wheat, although we cannot be sure.

This advice resulted in a 22% increase in deaths:
[Chart: Total mortality in the fibre advice group. From Stephan Guyenet]
However this result bordered on statistical significance, the 95% confidence interval being 0.99–1.65.
For people not familiar with statistics, a result is usually defined as statistically significant when there is less than a 5% chance that the result is due to luck alone. Here the data are consistent with a relative risk anywhere from 0.99 (a 1% decreased chance of dying) to 1.65 (a 65% increased chance of dying).
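For readers who want to see where a confidence interval like that comes from, here is a generic sketch of the standard log-based interval for a relative risk; the counts below are placeholders for illustration, not the actual DART data.

    import math

    def relative_risk_ci(events_a, total_a, events_b, total_b, z=1.96):
        # Relative risk of group A vs group B with an approximate 95% confidence
        # interval, using the usual log-transform method.
        risk_a = events_a / total_a
        risk_b = events_b / total_b
        rr = risk_a / risk_b
        se = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)  # SE of log(RR)
        lower = math.exp(math.log(rr) - z * se)
        upper = math.exp(math.log(rr) + z * se)
        return rr, lower, upper

    # Placeholder counts, for illustration only:
    rr, lo, hi = relative_risk_ci(events_a=120, total_a=1000, events_b=100, total_b=1000)
    print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # RR = 1.20, 95% CI 0.93 to 1.54

If the interval spans 1.0, as it does here and as the 0.99–1.65 interval above just barely does, the result is not conventionally statistically significant.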

Since the probability that the fibre advice had a protective or neutral effect was a little too high, this result has been largely overlooked. Had the study lasted a little longer, it would probably have raised far more suspicion toward whole-grains.

In fact, researchers found this effect to be statistically significant in a follow-up study. After adjusting for pre-existing conditions and medication use, table 4 of this study shows a hazard ratio of 1.35 (95% CI 1.02, 1.80) for the 2-year period of the randomized controlled trial.

These results are quite telling: according to these researchers, a 2-year randomized controlled trial showed that advising people recovering from a heart attack to eat at least six slices of wholemeal bread per day resulted in a statistically significant 35 percent increase in CHD compared to people not receiving this advice.

Wheat, Vitamin D Deficiency And Heart Disease

Many studies have found that vitamin D deficiency is associated with CHD.
However, vitamin D deficiency does not seem to cause heart disease: for example, several studies found that vitamin D supplementation did not prevent heart disease.
As this study concludes:
A lower vitamin D status was possibly associated with higher risk of cardiovascular disease. As a whole, trials showed no statistically significant effect of vitamin D supplementation on cardiometabolic outcomes.
Wheat consumption causing CHD could help explain these results. A study found that wheat consumption depletes vitamin D reserves. That could explain why vitamin D deficiency is associated with heart disease even though it does not seem to cause it: both vitamin D deficiency and heart disease could be consequences of wheat consumption.

Of course this is not the only possible explanation. For example, the DART study shows that fish consumption prevents CHD, and fish is a food rich in vitamin D.

Not the Perfect Culprit

To be clear, while it seems likely that wheat consumption is a risk factor for CHD, it is neither the only one nor the primary one. There are many other factors like smoking, hypertension, lack of exercise or stress. Even among dietary factors wheat is probably not the main one. For example, the DART study shows that the protective effect of fish intake is stronger than the adverse effect of wheat.

In addition, the deleterious effects of wheat might not affect everybody. One study showed that the variation in ApoB level following wheat and oat bran intake differed depending on the genotype of the individuals. In another study, whole-wheat intake worsened the lipid profile only in people with a specific genotype, compared to refined wheat.

How the wheat is prepared may play a role too. Studies show that sourdough bread improves mineral bioavailability (of magnesium, iron, and zinc, for example) compared to yeast bread or uncooked whole-wheat. Also, the content of proteins with potentially adverse effects, like gluten or wheat germ agglutinin, differs depending on the type of food.

Conclusion

There is strong evidence that wheat consumption is a risk factor for CHD. People at risk of CHD should avoid wheat, as should those trying to lose weight. In all cases, stopping wheat consumption for a month, for example, to see how one feels without wheat is always a good idea, since there is currently no available method to diagnose non-celiac wheat sensitivity, and even for celiac disease the average delay in diagnosis is 11 years in the US.

More studies looking at the links between wheat and CHD are urgently needed, since CHD is the leading cause of death, wheat is the second most widely consumed food, and whole-wheat is often recommended to lower the risk of CHD. Studies considering grains as a whole are bound to give inconsistent results, since different grains seem to have opposite effects in the case of CHD. So as much as possible, future studies should treat grains separately and consider things like the type of wheat product and genetic variability.

===============================================================
Read the complete article here or here.