This week’s contribution from “lifestyle epidemiology” was the report — from a huge study of 359,387 participants from nine countries in Europe — that spare tires, love handles, and bulges (the kind you battle) just ain’t good for you. The scientific term for all this is “abdominal adiposity.” Thus, the title of the paper, General and abdominal adiposity and risk of death in Europe (N Engl J Med. 2008;359:2105-20), by T. Pischon and 46 other authors. This is also related to the concept that being shaped like a pear is healthier than being shaped like an apple. Apples, beware!
Once upon a time, nutritional epidemiologists and other anthropometric scientists thought that the measure of a man (or woman) was the BMI — the body mass index. The body mass index is computed by dividing a person’s weight in kilograms (W) by the square of height in meters (H): BMI = W ÷ H². For weight in pounds and height in inches, the formula is BMI = 703.07 × W ÷ H². Thus, the existence of BMI tables; online programs for automatically computing your BMI by inputting your height and weight; the NCHS/CDC/NIH/American standard medical categories of BMI:
Underweight = <18.5
Normal weight = 18.5-24.9
Overweight = 25-29.9
Obesity = 30 or greater;
and the voluminous medical literature showing that (1) overweight and obesity are increasing problems in the United States, the United Kingdom, and much of the industrialized and developing world, and (2) being overweight is bad for your health and being obese is much worse. Indeed, it seems that as BMI increases from normal weight, not only are there monotonically increased risks of mortality and cardiovascular disease, there are also positive associations with dementia, cancers of the esophagus and gastric cardia (the gastroesophageal junction adjacent to the esophagus), and breast cancer.
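As a quick sketch (mine, not from any of the papers discussed here), the formula and the standard categories above fit in a few lines of Python:

```python
def bmi_metric(weight_kg, height_m):
    """Body mass index from weight in kilograms and height in meters."""
    return weight_kg / height_m ** 2

def bmi_imperial(weight_lb, height_in):
    """Body mass index from weight in pounds and height in inches."""
    return 703.07 * weight_lb / height_in ** 2

def bmi_category(bmi):
    """The standard NCHS/CDC/NIH medical categories listed above."""
    if bmi < 18.5:
        return "Underweight"
    elif bmi < 25:
        return "Normal weight"
    elif bmi < 30:
        return "Overweight"
    return "Obesity"

# Example: 80 kg at 1.80 m
print(round(bmi_metric(80, 1.80), 1))      # 24.7
print(bmi_category(bmi_metric(80, 1.80)))  # Normal weight
```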
To measure abdominal adiposity — appleness, if you will — in their New England Journal study, Pischon et al. measured waist circumference and waist-to-hip ratio. The 350,000+ participants were followed up for about ten years and 14,723 died. The authors reported that, “after adjustment for BMI, waist circumference and waist-to-hip ratio were strongly associated with the risk of death.” The relative risk among men with waist circumferences above the 95th percentile (>102.6 cm) was 2.1; the relative risk among women with waist circumferences above the 95th percentile (>88.9 cm) was 1.8. For men with waist-to-hip ratios above the 95th percentile (>0.98), the relative risk of death was 1.7; for women with waist-to-hip ratios above the 95th percentile (>0.84), the relative risk was 1.5. The authors point out that even men and women with low-risk BMIs had a relatively high probability of dying if their waist circumference or waist-to-hip ratio was high.
As with body mass index, there’s a rather large medical literature on abdominal adiposity and its association with health. It’s related to coronary heart disease, pulmonary function, insulin resistance, type 2 diabetes, gallstone disease, and metabolic syndrome. And a time trend study of NCHS’s National Health and Nutrition Examination Surveys (NHANES) between 1960 and 2000 shows an increase in the prevalence of abdominal obesity in the United States, which has “…ominous public health implications across the entire population, particularly among normal weight subjects.” But cheer up. (Or not.) Evidence suggests that exercise does reduce abdominal adiposity.
Those of you with a more practical, activist frame of mind may be interested in The Ashwell Shape Chart. In 1995 the British Medical Journal (BMJ) published a paper by Lean, Han & Morrison arguing that waist circumference — rather than BMI — could be used in health promotion programs to identify individuals who should seek and be offered weight management. Men with waist circumference ≥94 cm and women with waist circumference ≥80 cm should gain no further weight; men with waist circumference ≥102 cm and women with waist circumference ≥88 cm should reduce their weight. In another paper in the BMJ in 1995 Han, van Leer, Seidell, & Lean reported that larger waist circumferences did identify people with increased prevalences of cardiovascular risks (i.e., high risk total plasma cholesterol concentrations, high density lipoprotein cholesterol concentrations, and blood pressures) in a cross-sectional study. In true BMJ fashion, this naturally resulted in the spontaneous generation of a multitude of Letters to the Editor, three of which were: “Ratio of waist circumference to height may be better indicator of need for weight management” by Ashwell & Lejeune; “Ratio of waist circumference to height is strong predictor of intra-abdominal fat” by Ashwell, Cole & Dixon; and “Ratio of waist circumference to height is better predictor of death than body mass index” by Cox & Whichelow. In the first letter the authors used data from the 1992 health survey to show that the ratio of waist circumference to height was more highly correlated with the risk of coronary heart disease than any other anthropometric measure in both men and women — and much more highly than BMI.
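The two “action levels” from the Lean, Han & Morrison paper are simple enough to write down directly. This little sketch is mine — the function name and return strings are made up — but it uses exactly the thresholds quoted above:

```python
def waist_action_level(waist_cm, sex):
    """Lean, Han & Morrison's waist-circumference 'action levels'.

    Action level 1 (>=94 cm men, >=80 cm women): gain no further weight.
    Action level 2 (>=102 cm men, >=88 cm women): reduce weight.
    """
    level1, level2 = (94, 102) if sex == "male" else (80, 88)
    if waist_cm >= level2:
        return "reduce weight"
    if waist_cm >= level1:
        return "gain no further weight"
    return "no action needed"

print(waist_action_level(96, "male"))    # gain no further weight
print(waist_action_level(90, "female"))  # reduce weight
```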
In the second letter, the authors used computed tomography data to show that “the ratio of waist circumference to height is the best simple anthropometric predictor of intra-abdominal fat in men and women.” In the third letter, the authors used data from a prospective longitudinal follow-up of British adults to establish that, “there was a linear trend with the ratio of waist circumference to height for both all cause mortality and cardiovascular mortality in women and for cardiovascular mortality in men.” There was, however, no consistent trend observed for body mass index.
The first author of two of these BMJ letters and creator of the Ashwell Shape Chart mentioned above is Dr. Margaret Ashwell. Dr. Ashwell is a British nutrition researcher who has taken the argument that the ratio of waist circumference to height is superior to BMI to its logical conclusions, both scientifically, in the form of journal articles, and practically, in the form of the Ashwell Shape Chart.
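The ratio the letters keep returning to is trivial to compute. The 0.5 boundary in my sketch below is the rule of thumb often quoted in connection with Dr. Ashwell’s work — keep your waist circumference to less than half your height — so treat it as illustrative rather than as the chart itself:

```python
def waist_to_height_ratio(waist_cm, height_cm):
    """Waist circumference divided by height (same units for both)."""
    return waist_cm / height_cm

def whtr_flag(waist_cm, height_cm):
    """Illustrative rule of thumb often associated with Ashwell's work:
    keep your waist to less than half your height."""
    if waist_to_height_ratio(waist_cm, height_cm) >= 0.5:
        return "consider action"
    return "OK"

print(whtr_flag(90, 170))  # consider action (90/170 is about 0.53)
```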
I would love to discuss these issues in more detail, but I’m still having computer problems. I thought I had fixed my wireless router, but it keeps semi-crashing, and I’m afraid it’s going to crash completely before I post this article. For an iconoclastic, more critical viewpoint on the BMI, obesity, etc., check out the series of articles by Patrick Basham and John Luik on Spiked Online. I’d be interested in hearing your opinions.
I’ve been having frustrating computer problems for the last couple of weeks. An electrical surge switched off the circuit breaker for part of the house. When I turned the electricity back on, I discovered that both our computers and the wireless router were having major hardware problems. I actually enjoy fixing these things — unless I’m unsuccessful. I’ve now repaired the wireless router and one PC. The other PC — the one where I had started writing several blog posts and where I had stored most of my scientific papers (PDFs) — is now completely dead. I’ve decided to give up and bring it to a repair shop. (And yes — I’ve made that stupidest of all computer mistakes. I hadn’t backed up the hard drive for a few months.) I’m 99% sure that the hard disk is still intact.
Meanwhile, if you’ve never heard of the Ig® Nobel Prizes and are in need of a laugh, take a look at this. They’re awarded at Harvard every October and their motto is, “Research that makes people LAUGH and then THINK.” Nature has said that the Ig Nobel awards “are arguably the highlight of the scientific calendar.”
Incidentally, you’ll see that the Ig® Nobel Prizes are sponsored by Improbable Research. My personal nominee for improbable “epidemiological” research of the last few years is: Jasienska et al. Large breasts and narrow waists indicate high reproductive potential in women. Proceedings of the Royal Society/Biological Sciences 2004; 271:1213-1217. But take a look at the paper and you can be the judge.
I have an op-ed essay in the Atlanta Journal-Constitution. The title is “Measles not worth the risk. Prevention from diseases outweighs MMR side effects.”
Obviously I’m “outing” myself, but all in a good public health and scientific cause.
Please note that I have never worked with the vaccine industry, nor have I ever received even one cent from any pharmaceutical company.
Comments are welcome, but be civilized. Also: Be patient. The spam filter tends to be agonizingly slow.
Since I started this blog in April, I can’t count the number of times I’ve come across statements on other blogs like, “Epidemiology is just statistical manipulation of data.” This usually comes from an anti-vaccinationist commenter who clearly has never even opened an introductory epidemiology textbook.
So I was happy to see the article in the Washington Post on September 19th, entitled “For a Global Generation, Public Health Is a Hot Field.” The article begins with:
“Courses in epidemiology, public health and global health — three subjects that were not offered by most colleges a generation ago — are hot classes on campuses these days. They are drawing undergraduates to lecture halls in record numbers, prompting a scramble by colleges to hire faculty and import ready-made courses. Schools that have taught the subjects for years have expanded their offerings in response to surging demand.”
And later in the article: “The concepts introduced in basic epidemiology courses include causation and correlation, absolute risk and relative risk, biological plausibility and statistical uncertainty. Nearly all health stories in the news — from the possible hazards of bisphenol A in plastics and the theory that vaccines cause autism, to racial disparities in health care and missteps in the investigation of tainted peppers — are better understood with grounding in that discipline.”
The impetus for teaching epidemiology at the undergraduate college level has come from many places. In 1987, Dr. David Fraser, former president, Swarthmore College, and Adjunct Professor of Epidemiology, University of Pennsylvania, wrote an influential article in the New England Journal of Medicine, entitled “Epidemiology as a Liberal Art.” This link includes the entire paper — I highly recommend it.
But perhaps the major force behind the increase in epidemiology courses in colleges has been the Educated Citizen and Public Health Initiative, led by Dr. Richard Riegelman of George Washington University. (Dr. Riegelman is also the author of an excellent introductory text, entitled Studying a Study and Testing a Test: How to Read the Medical Evidence, which I’ve used in teaching both medical students and undergraduates.)
For more information on the Educated Citizen and Public Health Initiative, see:
Okay, I’m back to blogging, slowly but surely. Special thanks to Andrea, Anthony Cox, daedalus2u, and many others for your get well comments and e-mails. Thanks also to Science Mom and TheProbe for sending me e-mails with great ideas for future posts.
Recently the DISCOVER Magazine blog, Reality Base, asked former U.S. Surgeon General (1982-89), C. Everett Koop, the question, “What are the most important things the next U.S. president needs to do for science?”
His answer #1 was: “Appoint the next surgeon general with an eye to scientific and medical prowess, rather than make it a political appointment.”
I couldn’t agree more. So if McCain is elected president, even if I don’t expect him to choose somebody like Jack Geiger as Surgeon General (although I think Jack would be a great Surgeon General), the message from Koop is that McCain shouldn’t choose some physician just because he/she has been a long-time Republican Party kiss-ass.
The same sort of advice goes for Obama, although it’s easier to find scientifically accomplished public health activists among liberals than among conservatives. But you never know — Dr. Koop was appointed by Ronald Reagan.
Also: As Dr. Koop probably knows, in order for the Surgeon General to use his or her scientific or medical prowess, the Office of the Surgeon General needs to become an independent entity in which the Surgeon General reports directly to the President. This is the way things were when Koop was Surgeon General. Now, the Surgeon General reports to the Assistant Secretary for Health. Indeed, the last Surgeon General, Dr. Richard Carmona (2002-6), had to have all his speeches and other public statements cleared by the office of the Assistant Secretary for Health. So much for independence. (See the section entitled “Political interference” in the Wikipedia article on Dr. Carmona — it’s an accurate summary.)
While I’m on the subject of political appointments (or rather avoiding political appointments), I should add that Dr. Koop’s advice to the president should also be extended to the Director of NIH. In other words: Appoint the NIH Director with an eye to scientific prowess; avoid the temptation to use any political or ideological criteria in choosing the NIH Director.
Let’s look at the major accomplishments of the Directors of NIH before their appointments, starting in 1975. The quotes are from the official web site of the history of NIH.
Donald S. Fredrickson (1975-81): “…internationally known authority on lipid metabolism and its disorders…”
James B. Wyngaarden (1982-89): “…internationally recognized authority on the regulation of purine biosynthesis and the genetics of gout…”
Bernadine Healy (1991-93): “…chairman of the Research Institute of the Cleveland Clinic Foundation, where she directed the research programs of nine departments…” [No scientific accomplishments mentioned]
Harold E. Varmus (1993-99): “Winner of the Nobel Prize in 1989 for his work in cancer research…leader in the study of cancer-causing genes called ‘oncogenes,’ and an internationally recognized authority on retroviruses…”
Elias A. Zerhouni (2002- ): “…credited with developing imaging methods used for diagnosing cancer and cardiovascular disease. As one of the world’s premier experts in magnetic resonance imaging (MRI), he has extended the role of MRI from taking snapshots of gross anatomy to visualizing how the body works at the molecular level. He pioneered magnetic tagging, a non-invasive method of using MRI to track the motions of a heart in three dimensions. He is also renowned for refining an imaging technique called computed tomographic (CT) densitometry that helps discriminate between non-cancerous and cancerous nodules in the lung.”
President George H. W. Bush set an unfortunate precedent in 1991 when he appointed Bernadine Healy as Director of the NIH. The appointment was purely political, based on Healy’s lifetime support of the Republican Party. Although many feminists were overjoyed at the time, Dr. Healy was hardly a scientist. She was a career administrator.
Let’s not forget that the National Institutes of Health have often been called the greatest scientific institution in the history of the world. Bernadine Healy was about as qualified for the job of NIH Director as Sarah Palin is to be President of the United States.
(Recent quote from J.B. Handley: “If Dr. Healy is the Ted Williams of her field, Paul Offit is struggling to make his neighborhood T-Ball team–that’s how big she is.” Sorry, Mr. Handley, but the perception in the scientific community would essentially reverse this analogy. Paul Offit is undoubtedly the Ted Williams of his field. In baseball terms Bernadine Healy is more like the mediocre player who becomes a pretty good manager, and then goes into announcing. Baseball fan commenters are welcome to come up with someone; I can’t think of anyone offhand.)
So here’s hoping that the next president will appoint both a Surgeon General and an NIH Director with real “scientific prowess.” Personally, I’d like to see qualified women in both jobs.
In the 19th century, people with respiratory ailments went to the Swiss Alps or the seashore in hopes of a cure. Although most of my coughing and wheezing has faded away, my bronchial reactivity is not gone completely. So I’m in Aruba for a week with my wife.
I have to admit that this on-and-off feeling that a hippo is sitting on my chest is really getting to be a drag. But I saw my doctor on Thursday and he said that this is a perfectly normal part of recovery from acute bronchitis.
I have acute bronchitis. I’ve been too wiped out to do anything but sleep (and cough and wheeze). I’ve been using an albuterol inhaler and I’m now taking an antibiotic. (Sticklers for evidence-based medicine, take note: in this case the antibiotic may be just a placebo. [See Fahey T, Smucny J, Becker L, Glazier R. Antibiotics for acute bronchitis. Cochrane Database of Systematic Reviews 2004, Issue 4. Art. No.: CD000245. DOI: 10.1002/14651858.CD000245.pub2.])
Hope to be back blogging in a few days.
I’ve received a couple of e-mails asking me to comment on what must be one of the most atrociously irresponsible medical news stories in recent memory. Unfortunately, I woke up this morning in pure migraine hell. It’s been more than ten hours now, I’ve taken 80 mg of eletriptan and a few doses of much stronger pain medication, but I still can’t leave my dark bedroom for long. When I look at a computer screen it feels like a roomful of angry cancer epidemiologists are throwing cell phones at my head.
Back to the darkness.
Last week I blogged about the 2007 DeSoto and Hitlan study, Blood levels of mercury are related to diagnosis of autism: a reanalysis of an important data set (Journal of Child Neurology 2007;22:1308-11), in a post entitled Epi Wonk’s Intro to Data Analysis.
Dr. DeSoto has posted a reply at her University of Northern Iowa web site, Frequently Asked Questions about DeSoto and Hitlan (2007), more specifically at http://www.uni.edu/desoto/epiwonk%20query.htm. In response, there are a few issues I should clarify.
1. In my post I stated: “In May 2007 Dr. Catherine DeSoto wrote to the Editorial Office of the Journal of Child Neurology expressing concern about what appeared to be obvious inconsistencies in the data analysis of the results section of the Ip et al. article. Dr. DeSoto’s specific concern related to the statistical interpretation of the data.” As Dr. DeSoto states in her reply: “This is not accurate at all.” Her concern was that the statistical results reported by Ip et al. were just plain wrong. Ip et al. reported that the difference in the means between autistic cases and controls was not statistically significant, a report that was clearly in error. DeSoto and Hitlan discovered that the difference in means was statistically significant. The reason this is important is that a disagreement about statistical “interpretation” is (as Dr. DeSoto says) “something about which learned persons may disagree” — not an obvious error of the sort Ip et al. made.
2. In my original post I stated: “According to the abstract, DeSoto & Hitlan ‘found that the original p value was in error and that a significant relation does exist between the blood levels of mercury and diagnosis of an autism spectrum disorder.’” As written in my post, this might be seen as being taken out of context. This conclusion was actually qualified — the full sentence makes clear that DeSoto & Hitlan are speaking only about this data set: “We have reanalyzed the data set originally reported by Ip et al. in 2004 and have found that the original p value was in error and that a significant relation does exist between the blood levels of mercury and diagnosis of an autism spectrum disorder.” In other words, DeSoto & Hitlan did not draw conclusions beyond the data set.
3. Although I don’t think that I’m guilty of ad hominem attacks on Dr. DeSoto or Dr. Hitlan, it seems that there have been a number of such ad hominem attacks floating around the blogosphere. I think the following statement from Catherine DeSoto may be helpful to some readers in clearing up some unkind speculations: “I was invited by an attorney to testify/get involved in the vaccine court proceedings, but declined…”
4. Dr. DeSoto states: “I try to be as clear as humanly possible, and hope that you yourself will revise your point three to avoid giving readers on either side of the issue the impression that we feel we have proven a relationship between mercury and autism with this one data set.” I’m not sure what Dr. DeSoto is referring to here, but I never meant to say, or even imply, that DeSoto and Hitlan felt that they had proven a relationship between mercury and autism with this one data set.
There are a few issues upon which DeSoto & Hitlan and I continue to disagree:
1. In her response, Dr. DeSoto states, “…I think your website, among other valuable things, serves to make it clear that the difference between autistic and control subjects shows up using a variety of statistical techniques, yes?” My answer to this is: yes and no. I invite readers to check out my entire analysis, including the graphical results, since in my opinion the findings cannot be summarized in one statistical test. The entire difference between 81 cases and 54 controls is due to an excess of ASD cases in the upper part of the blood mercury distribution, i.e., greater than 25 nmol/L. After I posted my article I received e-mails from two toxicologists telling me that this finding was fascinating, was completely new to them, and needed to be replicated in other samples of autistic children. In other words, nobody would have hypothesized a priori the pattern of differences between ASD cases and controls that I reported. This reinforces my inclination to view my analysis of this data set as an exploratory analysis that needs to be replicated. This — along with the quantitative findings shown in my original post — leans me towards the view that either (1) statistical significance is meaningless in the context of this type of exploratory research, or (2) if you pushed me, I’d say my results weren’t significant because they’re totally due to the post hoc discovery of a difference at greater than 25 nmol/L.
On the other hand, I can’t seem to get away from those who insist on using one statistical test for the entire data set. I received several e-mails and comments arguing that the nonparametric Mann-Whitney Rank Sum Test (AKA the 2-sample Wilcoxon Rank Sum Test) would be appropriate for this data set. My results are shown in Comment section of the original post, but here they are again:
Mann-Whitney’s Statistic = 1730.0
Z statistic = 2.06
2 tailed p = 0.0395
Median difference = 3.00
95% Confidence Interval: 0.00 to 7.00
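For readers who want to see the machinery behind those numbers, here’s a minimal pure-Python sketch of the Mann-Whitney (Wilcoxon rank sum) test. It uses the large-sample normal approximation and leaves out the tie correction in the variance, so it won’t exactly reproduce a stats package on heavily tied data; the toy numbers at the end are made up, not the blood mercury values:

```python
import math

def mann_whitney_u(x, y):
    """Mann-Whitney U for sample x vs sample y, with a large-sample
    normal approximation for the two-tailed p value (midranks for ties)."""
    pooled = sorted(x + y)
    # Assign each distinct value the average of the ranks it occupies.
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    n1, n2 = len(x), len(y)
    rank_sum_x = sum(ranks[v] for v in x)
    u = rank_sum_x - n1 * (n1 + 1) / 2        # U statistic for sample x
    mean_u = n1 * n2 / 2
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # no tie correction
    z = (u - mean_u) / sd_u
    p = math.erfc(abs(z) / math.sqrt(2))      # two-tailed p value
    return u, z, p

# Toy data: every x below every y gives the extreme value U = 0.
u, z, p = mann_whitney_u([1, 2, 3], [4, 5, 6])
print(u)  # 0.0
```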
Okay, from a strictly formal statistical point of view, there is a statistically significant difference (p < 0.05) between the blood mercury distributions of the ASD case group and the control group. I think it’s also worth noting in this context that the geometric means were 11.1 for the control group and 14.4 for the ASD cases, a difference of 3.3 nmol/L. So if I close my eyes, pretend the two distributions aren’t bi- or tri-modal, and try to make a comparison of “measures of central tendency,” both the medians and the geometric means differ by about 3 nmol/L. Is a difference of 3 nmol/L in blood mercury levels between ASD children and controls clinically significant? I’ll let you be the judge.
2. In any event, I still stand by my conclusion: “We can conclude absolutely nothing about the association of ethylmercury [thimerosal] in vaccines to autism from these data.” These data should never have been used in any way in statements about the association between vaccines and autism. Shattuck should never have cited Ip et al. as a recent study that “failed to establish a connection between measles-mumps-rubella vaccination or the use of mercury-based vaccine preservative and autism.” Fombonne et al. should not have cited Ip et al. as a biological study of ethylmercury exposure that “failed to support the thimerosal hypothesis.” Why? Because most of the blood mercury measured in these Hong Kong children was almost certainly due to methylmercury exposure, not thimerosal.
Nevertheless, I think it is still important to try to carry out data collection efforts in which mercury levels are measured and compared in autistic children and controls. This at least involves observational data collected at the individual level. Otherwise, we end up in a situation where people are making causal inferences about the association between mercury exposure and autism from truly awful ecological studies.
Unless you’re extremely vigilant and/or organized, it really is easy to miss important information. Just today I came across a June 21st post on Ben Goldacre’s Bad Science blog on a May 27th PLoS Medicine article entitled, “How Do US Journalists Cover Treatments, Tests, Products, and Procedures? An Evaluation of 500 Stories.”
I also discovered a new blog (started in June), a healthy distrust, which is devoted primarily to issues like: science in the media, scientists versus journalists, accuracy of press coverage of science, the use of press releases by journalists (instead of actually reading the study), the reporting of unpublished research in the press (e.g., poster presentations from conferences [my example]), quoting the public as a source in scientific stories, and using quacks (my term) as sources to balance the story. Frankly, my brief description doesn’t do the blog justice. See: “The biggish picture - or why this blog exists.”
The author of the PLoS Medicine article is Gary Schwitzer, Associate Professor at the University of Minnesota School of Journalism and Mass Communication, Publisher of Health News Review, and blogger of the Schwitzer health news blog. The article is about a US Web site project, HealthNewsReview.org, which evaluates and grades health news coverage, notifying journalists of their grades. Schwitzer begins the article by briefly describing two similar projects:
(1) The Australian Media Doctor Web site, which monitors the health news coverage of 13 Australian news organizations.
(2) The Canadian Media Doctor Web site, which evaluates health news coverage by 12 Canadian news organizations.
HealthNewsReview.org monitors news coverage by the top 50 most widely circulated newspapers in the US; the most widely used wire service, the Associated Press; and the three leading newsweekly magazines — TIME, Newsweek, and U.S. News & World Report. Each weekday they watch the morning and evening newscasts of the three most watched television networks — ABC, CBS, and NBC.
SUMMARY POINTS OF THE ARTICLE
- The daily delivery of news stories about new treatments, tests, products, and procedures may have a profound — and perhaps harmful — impact on health care consumers.
- Health News Review evaluates and grades US health news coverage, notifying journalists of their grades.
- After almost two years and 500 stories, the project has found that journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms.
- Reporters and writers have been receptive to the feedback; editors and managers must be reached if change is to occur.
- Time (to research stories), space (in publications and broadcasts), and training of journalists can provide solutions to many of the journalistic shortcomings identified by the project.
In order to be eligible for review, a story must include a claim of efficacy or safety in a health care product or procedure (drug, device, diagnostic or screening test, surgical procedure, dietary recommendation, vitamin, supplement). The rating instrument used includes ten criteria addressed by the Association of Health Care Journalists’ Statement of Principles (A Statement of Principles for Health Care Journalists by Gary Schwitzer. American Journal of Bioethics 2004; 4: W9-W13):
1. Adequately discusses costs.
2. Quantifies benefits.
3. Adequately explains and quantifies potential harms.
4. Compares the new idea with existing alternatives.
5. Seeks out independent sources and discloses potential conflicts of interest.
6. Avoids disease mongering.
(Journalists should “avoid promulgating the medicalization of normal states of or variations in health (e.g., baldness, menstruation, short stature, etc.). We also try to educate journalists about surrogate endpoints and about how risk factors are not diseases. With this criterion, we also remind them not to exaggerate the prevalence or incidence of a condition.”)
7. Reviews the study methodology or the quality of the evidence.
8. Establishes the true novelty of the idea.
9. Establishes the availability of the product or procedure.
10. Appears not to rely solely or largely on a news release.
In their evaluation of 500 US health news stories over 22 months, between 62% and 77% of stories failed to adequately address costs, harms, benefits, the quality of the evidence, and the existence of other options when covering health care products and procedures. Only 38% of stories were rated satisfactory for putting the intervention under discussion into the context of existing alternative options. Of the first 500 stories reviewed, 41 (8%) received their highest scores. They appear online at http://www.healthnewsreview.org/review/by_rating.php?rating=5. (As of today, the 22nd of July 2008, this has been updated to 73 five-star reviews out of 615 stories.)
Gary Schwitzer concludes the article by hoping that HealthNewsReview.org’s “evaluation of health news will lead news organizations — and all who engage in the dissemination of health news and information — to reevaluate their practices to better serve a more informed health care consumer population.”
I’d like to end today’s post by once again welcoming a healthy distrust to the blogosphere.