Archive Page 2

A possible link between immunization and asthma is of great concern to many parents. As the authors of the paper I discuss today have stated: “It is important that researchers clarify this issue, because… the perception that immunization causes asthma may become a significant determinant of parents’ attitudes toward routine vaccination of their children.”

Fortunately, Dr. Ran Balicer and his colleagues published a systematic literature review of this very subject in the journal Pediatrics in November 2007. Dr. Balicer and his co-authors are with the Department of Epidemiology at Ben-Gurion University of the Negev in Israel. I hope it makes everybody happy that neither Dr. Balicer nor any of his co-authors work for a pharmaceutical company or the CDC, nor did they accept any money from drug companies during their research. The title of their review is, “Is Childhood Vaccination Associated With Asthma? A Meta-Analysis Of Observational Studies.”

The aim of the study was to assess the available evidence on the association of whole cell pertussis (whooping cough) and BCG vaccination with the risk of asthma in childhood and adolescence. The major electronic medical databases (Medline, National Library of Medicine Gateway, and Cochrane Library) were searched, and reference lists of publications were reviewed for relevant birth-cohort studies and randomized controlled trials from 1966 to March 2006. Seventy-one original scientific studies were found and read in full.

Studies had to meet the following criteria to be included in the meta-analyses:
(1) Directly compared vaccinated and unvaccinated children.
(2) Validated vaccination status by medical charts.
(3) Used preset criteria to define asthma.

Seven studies of pertussis vaccination (with a total of 186,663 patients) and five studies of BCG vaccination (with a total of 41,479 patients) met the authors’ inclusion criteria. Here are the results in two tables. An odds ratio of 1.00 means that there is no relationship between immunization and asthma.


No statistically significant association was detected between either whole cell pertussis or BCG vaccination and incidence rates of asthma during childhood and adolescence. The authors conclude: “Currently available data…do not support an association, [causal] or protective, between receipt of the BCG or whole cell pertussis vaccine and risk of asthma in childhood and adolescence.” These are the two vaccines that have been studied the most in this context. The authors hope, as do I, that “these findings could be used to relieve parental concerns that could otherwise lead to vaccination refusal.”
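For readers curious how the odds ratios from seven (or five) separate studies get combined into one summary estimate, here is a minimal sketch of the standard inverse-variance (fixed-effect) pooling of log odds ratios. The three (OR, CI) triples below are made-up illustrations, not the actual results of the studies in the meta-analysis.

```python
# Sketch of inverse-variance (fixed-effect) pooling of odds ratios.
# The study results below are HYPOTHETICAL, for illustration only.
import math

# (odds ratio, lower 95% CI, upper 95% CI) for each hypothetical study
studies = [(1.10, 0.90, 1.34), (0.95, 0.80, 1.13), (1.44, 1.17, 1.85)]

weights, log_ors = [], []
for or_, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from CI width
    weights.append(1 / se**2)                        # inverse-variance weight
    log_ors.append(math.log(or_))

pooled_log_or = sum(w * l for w, l in zip(weights, log_ors)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
pooled_or = math.exp(pooled_log_or)
ci = (math.exp(pooled_log_or - 1.96 * pooled_se),
      math.exp(pooled_log_or + 1.96 * pooled_se))
print(f"Pooled OR = {pooled_or:.2f}, 95% CI {ci[0]:.2f} - {ci[1]:.2f}")
```

The key idea is that each study is weighted by the precision of its estimate, so large studies dominate the pooled result.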

As a scientist, I need to add two caveats:

1. This is still not a 100 percent settled issue for two reasons, which Balicer et al. openly acknowledge:
(A) “The multitude of biases in studies that have used a birth-cohort design stress the need for additional adequately controlled, large-scale studies.”
(B) Balicer et al. ranked studies on quality. The pertussis vaccination study that they ranked as the highest quality was the UK Retrospective Study of Farooqi and Hopkin. (Farooqi IS, Hopkin JM. Early childhood infection and atopic disorder. Thorax 1998; 53: 927-32.) This investigation did report a positive association between pertussis immunization and asthma, with an odds ratio of 1.44 (95% CI: 1.17 - 1.85), although it was the only included study that found such an association.

2. A recent study of children born in Manitoba, Canada in 1995 found that a delay in diphtheria, pertussis, tetanus vaccination is associated with a reduced risk of childhood asthma (McDonald KL, Huq SI, Lix LM, Becker AB, Kozyrskyj AL. J Allergy Clin Immunol. 2008;121:626-31). However, this was a study of the timing of vaccination among vaccinated children. As the authors of the study admit: “To our knowledge we are the first to report that delay in administration of the first dose of DPT immunization is significantly associated with reduced risk of developing asthma in childhood.”

In sum: A recent systematic literature review of high-quality studies that directly compared vaccinated and unvaccinated children, validated vaccination status by medical charts, and used preset criteria to define asthma found no association between childhood vaccination and asthma.


Data courtesy of DeSoto and Hitlan (2007)

Since I became involved in the question of vaccines and autism — and more specifically the question of mercury in vaccines and autism — every week I’ve received a few identical e-mails from anti-vaccinationists that consist of a list of references. It’s always the same references and I’ve come to think of it as “The List.” Always on the top of The List is DeSoto MC, Hitlan RT. Blood levels of mercury are related to diagnosis of autism: a reanalysis of an important data set. Journal of Child Neurology. 2007;22:1308-11. I read the DeSoto and Hitlan paper back in April and was skeptical about the reported results then. However, I heard from an epidemiologist friend of mine that Dr. Catherine DeSoto was extremely courteous and forthcoming in answering questions about the paper, so I decided to let my skepticism simmer for a while.

Then a few days ago an important paper in Science was published on Identifying Autism Loci and Genes by Tracing Recent Shared Ancestry. Naturally the Science paper was reported by hundreds of newspapers and other media outlets. One of the best newspaper stories was a Washington Post article entitled, “Mental Activity May Affect Autism-Linked Genes.” Unfortunately, the comment section after the Washington Post article was completely hijacked by antivaccinationists who insisted that vaccines cause autism and that genetic studies of autism are part of a cover-up of the truth. And once again, one of the commenters presented “The List” with the DeSoto and Hitlan paper on top.

My simmering skepticism boiled over and I decided to take a closer look at the DeSoto & Hitlan paper. Obviously you need a little background here, especially since the history of the DeSoto & Hitlan paper actually involves at least three publications.
(1) In June 2004 the Journal of Child Neurology published an article by Patrick Ip, Virginia Wong, Marco Ho, Joseph Lee, and Wilfred Wong of the University of Hong Kong. Ip et al. performed a case-control study to compare the hair and blood mercury levels of 82 children with autistic spectrum disorder (ASD) and a control group of 55 normal children. (Important note: I am NOT going to discuss the analyses of hair mercury levels, per DeSoto & Hitlan’s statement, “The hair analysis data is, in fact, interesting. But it is of secondary importance.”) The ASD cases included all ASD children actively followed up from April to September 2000 in the Duchess of Kent Children’s Habilitation Hospital in Hong Kong. All ASD children were assessed by Virginia Wong. The diagnosis of ASD was made only if they fulfilled the DSM-IV diagnostic criteria for autism and had undergone a structured interview using the Autism Diagnostic Interview-Revised. The control group consisted of “normal children who had mild viral illness and were admitted to the pediatric ward of [Hong Kong’s] Queen Mary Hospital.” Ip et al. reported that there were no differences in mean mercury levels. The mean blood mercury levels of the ASD case and control groups were reported to be 19.53 and 17.68 nmol/L, respectively (P = 0.15), a difference of 1.85. Ip et al. concluded that “there is no causal relationship between mercury as an environmental neurotoxin and autism.” The authors also noted: the “blood mercury levels of both autistic and normal children in Hong Kong were elevated [compared to other populations around the world];” and “this study is limited by the sample size and culture because Hong Kong Chinese are famous for eating seafood.” (Ip P, Wong V, Ho M, et al. Mercury exposure in children with autistic spectrum disorder: case-control study. J Child Neurol. 2004;19:431-4. Erratum in: J Child Neurol. 2007;22:1324.)

(2) In May 2007 Dr. Catherine DeSoto wrote to the Editorial Office of the Journal of Child Neurology expressing concern about what appeared to be obvious inconsistencies in the data analysis of the results section of the Ip et al article. Dr. DeSoto’s specific concern related to the statistical interpretation of the data. Dr. Roger Brumback, the Editor-in-Chief of the Journal of Child Neurology, contacted Virginia Wong, the corresponding author of the Ip et al article, and requested the original data. Professor Wong provided a spreadsheet of the original data, so all of the original data can be found in two tables at: Brumback RA. Note From Editor-in-Chief About Erratum for Ip et al Article. J Child Neurol 2007;22:1321-1323. These are the data that I used for my analyses below.

(3) At the request of the Editor-in-Chief of the Journal of Child Neurology, Dr. Catherine DeSoto and Dr. Robert Hitlan performed an analysis of the original data, which was published as a special article in the November 2007 issue of the journal. According to the abstract, DeSoto & Hitlan “found that the original p value was in error and that a significant relation does exist between the blood levels of mercury and diagnosis of an autism spectrum disorder.” A few details about DeSoto & Hitlan’s analysis would be in order here, but there aren’t many details. (This was the first reason that I was skeptical about the paper.) The authors do mention that they excluded two outliers that were greater than 3 standard deviations above the mean. I have absolutely no problem with this — in fact, I agree with DeSoto & Hitlan that it was a good idea. What I find unusual is that the authors mention only one of the outlying values — a blood mercury level of 98 nmol/L in the ASD group. I had to go to the original data to figure out that the other outlier they excluded was a value of 74 in the control group.

Here is the total extent of the results section regarding blood mercury levels: “Logistic regression was performed using blood mercury level as the predictor and the autistic/control group as the criterion. Results of this reanalysis indicate that blood mercury can be used to predict autism diagnosis. Data included: r = .20, r2 = .04, F(1,133) = 5.76, P = .017. This finding indicates that there is a statistically significant relationship between mercury levels in the blood and diagnosis of an autism spectrum disorder.” That’s it for results. I’m going to skip any discussion of the r and r2, since they’re not immediately relevant to this discussion and they’re just complex enough to confuse a lot of people (but see below). This leaves us with an F-test from a logistic regression and a highly significant P-value. The authors don’t say which statistical package they used. The F-test seems to be a test of whether the mean blood mercury levels of the ASD case group and the control group are different — the same hypothesis Ip et al. were testing — but this is unclear. Again this seems most unusual to me, but DeSoto & Hitlan do not provide the reader with means for either of the two groups. Fortunately, my epidemiologist friend (mentioned above) e-mailed Dr. DeSoto and she responded almost immediately with the missing information (which I’ve confirmed in my own analyses). With two outliers removed:

Mean blood mercury level in control group: 13.59 nmol/L

Mean blood mercury level in ASD cases: 18.57 nmol/L

Difference between groups: 4.98 nmol/L

95% confidence interval for difference between groups: 0.88 - 9.1 nmol/L

I just don’t understand why DeSoto & Hitlan didn’t provide these data in their paper.
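For what it’s worth, here is how a 95% confidence interval for a difference in group means is typically computed, using Welch’s unequal-variance version. The two samples below are small hypothetical stand-ins, not the actual Hong Kong data.

```python
# Sketch: 95% CI for a difference in means (Welch's unequal-variance method).
# The two samples are HYPOTHETICAL, not the Ip et al. data.
import statistics as st
from scipy.stats import t

controls = [5, 5, 8, 9, 12, 14, 17, 20, 21, 24]   # hypothetical values, nmol/L
cases    = [5, 8, 14, 18, 20, 23, 25, 30, 35, 41]

diff = st.mean(cases) - st.mean(controls)
va = st.variance(cases) / len(cases)       # variance of each sample mean
vb = st.variance(controls) / len(controls)
se = (va + vb) ** 0.5
# Welch-Satterthwaite approximation to the degrees of freedom
df = (va + vb) ** 2 / (va**2 / (len(cases) - 1) + vb**2 / (len(controls) - 1))
half_width = t.ppf(0.975, df) * se
print(f"difference = {diff:.2f}, 95% CI {diff - half_width:.2f} to {diff + half_width:.2f}")
```

If the resulting interval excludes 0, the difference is significant at the 0.05 level; that is exactly what the 0.88 - 9.1 interval above shows.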

In any event, now we’ve learned a little bit more, but I was still skeptical of these analyses for another reason. Ip et al. state outright that they performed a Student’s t test to compare the means of the two groups. DeSoto & Hitlan never come right out and say that they’re interested in comparing means, but it’s certainly implied. However, a comparison of arithmetic means, and certainly the use of the t test, assumes that we’re comparing two normally distributed samples. Although I’d never analyzed blood mercury before, I have analyzed blood lead levels. In my experience, blood lead levels are never normally distributed. This is why we use geometric means and percentiles — not arithmetic means — when we report descriptive statistics on blood lead levels. So I was skeptical about whether blood mercury levels would be normally distributed in children from Hong Kong.
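To illustrate the point about skewed exposure data, here is a small simulation: for right-skewed (roughly log-normal) values, the arithmetic mean gets pulled upward by the long tail, which is one reason geometric means are preferred for data like blood lead or mercury. The simulated values only loosely mimic a blood-metal distribution; they are not real measurements.

```python
# Sketch: arithmetic vs geometric mean for right-skewed (log-normal-ish) data.
# Simulated values only; they loosely mimic a skewed blood-metal distribution.
import math
import random

random.seed(0)
values = [math.exp(random.gauss(2.3, 0.6)) for _ in range(200)]

arithmetic_mean = sum(values) / len(values)
geometric_mean = math.exp(sum(math.log(v) for v in values) / len(values))

# The long right tail inflates the arithmetic mean above the geometric mean.
print(round(arithmetic_mean, 1), round(geometric_mean, 1))
```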

The first thing I always do — and I always told my students to do this — is to actually LOOK at the data. It’s tempting to start out by looking at the ASD cases, but my advice is that it’s wiser to check out the control group first. I’ve excluded an outlier, so there are 54 controls. Since there are unequal numbers of controls and cases, it’s easier to compare the two groups if we use percentages instead of raw numbers. So here’s the percentage distribution of the control group:

Some people don’t like these skinny little bars that PowerPoint provides in its histograms, so here’s the identical data shown in an “area under the curve” type chart:

If these data are normally distributed, or anything close to normally distributed, then I’m Bernadine Healy. In fact, trying to choose a “measure of central tendency” for these data is pretty much hopeless. The arithmetic mean of 13.6 is essentially meaningless. (No pun intended.) There were 54 controls. Ten of the controls have blood mercury values of 5.0 nmol/L, which means 5 is the mode, but that doesn’t help us much either. Six controls had a value of 8.0, but saying 8 is a second mode would be silly. The best thing to do is look at the data and describe what’s actually there: There’s a cluster of 36 controls with values between 5 and 14 nmol/L that’s very heavily skewed such that the mode of the cluster is 5. There’s a second cluster of 13 controls with values between 17 and 24. Then there are 5 controls scattered across higher values between 33 and 42. One useful aspect of looking at the controls first is that it gave me an opportunity to choose an unbiased cut-off point for my odds ratio analysis. Since the literature doesn’t provide definitive advice for a “high” blood mercury level for Hong Kong children, and these controls have this nice space with no values between 14 and 17, I decided to define greater than 16 nmol/L as a “high” mercury level for my odds ratio analysis. Now let’s look at the data for the ASD cases. Again this is a percentage distribution.


Once again we have a distribution of values that’s not even close to normally distributed. There were 81 ASD cases. Fourteen cases had a blood mercury value of 5.0. I suppose you could say there was a second mode at about 20 nmol/L, since there were 6 cases with a value of 20 and 4 cases with a value of 23. What does the arithmetic mean of 18.6 signify in a distribution like this? Very little, I think. Here’s a chart showing the percentage distributions of the ASD cases and the controls compared:

There were so many more blood mercury values between 5 and 10 than in any other intervals, I’ve shown these as individual categories. Then I’ve categorized blood mercury levels in 5 nmol/L categories. So: Is there a difference between these two distributions? And how would we characterize the difference? It looks like the main difference is that the ASD cases have more blood mercury values at the upper end of the distribution than do the controls. By “the upper end of the distribution” I mean values greater than 25 nmol/L. In fact, that’s just what’s going on. Of the 54 controls, there were only 5 children with blood mercury levels greater than 25 (and the greatest value was 42). Of the 81 ASD cases, there were 21 children with values greater than 25, with 4 values between 41 and 45 and a high value of 59.

So how do we go about carrying out a “formal” statistical comparison of these two groups? First, any analysis involving a comparison of arithmetic means, such as a t-test, or a logistic regression in the form that DeSoto & Hitlan used (with blood mercury entered simply as a “continuous” variable), would be wrong. Why? Because the blood mercury values of these two samples just don’t come from normally distributed populations or anything close. Second, it’s common to calculate geometric means for blood mercury levels. In these two samples, the geometric means were 11.1 for the control group and 14.4 for the ASD cases, a difference of 3.3. A formal statistical comparison of the geometric means would be a bit more complex, because it would involve a logarithmic transformation of the blood mercury values. But the purpose of the log transformation is to make the distributions normal, and there’s no way you’re going to make these two distributions normal unless you get somebody to jump up and down on the bars at the value for 5.0 until they almost disappear. (Any nominees for jumpers out there? Certified data fudgers?) So formal comparison of geometric means would also be wrong.

This leaves us with an analytic method that makes no assumptions about the distributions of the cases and controls — the calculation of an odds ratio or odds ratios. Since this post is too long already, I’m not going to explain what an odds ratio is except to say (1) it’s the optimal measure of strength of association in a case-control study and (2) please don’t make the mistake of assuming that a prospective cohort study of blood mercury levels and autism would have found a relative risk, or risk ratio, or rate ratio similar to the odds ratios I’m about to show. To learn more about the odds ratio, read the article in the British Medical Journal series on medical statistics or Google odds ratio. The Wikipedia article on “Odds Ratio” is okay, but not great. For an explanation of confidence intervals, see “Statistical Criteria in the Interpretation of Epidemiologic Data” and “Beyond the Confidence Interval.” So here are the results of my analysis:

Odds Ratio
(with 95% Confidence Interval)

Using blood mercury cut-off point of 17 nmol/L
(above 16 considered high mercury level)

Odds Ratio = 1.86

95% Exact* CI: 0.86 - 4.06

p = 0.126
(Chi-square = 2.34)

*Exact confidence interval calculated using the method of Mehta CR. The exact analysis of contingency tables in medical research. Stat Methods Med Res. 1994;3:135-56.

But wait. I felt a sudden disturbance in the Force, as though thousands of biostatisticians were writhing in agony because I used only two categories and “didn’t take advantage of all of the data.” So let’s do a trend analysis, using the value 5 nmol/L as the reference category (where OR = 1.00):

Blood Mercury (nmol/L)    Odds Ratio
5.00                      1.00
6 to 10                   0.63
11 to 15                  0.98
16 to 20                  1.07
21 to 25                  1.00

This isn’t a complete trend analysis, obviously. When I stop at 25 nmol/L, the chi-square for linear trend is 0.378 and the p-value is 0.54. One of the great things about entering data by hand and actually LOOKING at the data while you do it is that you can stop and notice certain things. Like, for example: in these data there’s no significant difference between the two distributions under 25 nmol/L. So any difference between the blood mercury distributions of the cases and controls is being “driven” by an excess of ASD cases with values above 25.

In order to do a proper chi-square analysis for trend, one really needs at least 5 individuals in each cell. So I had to group all the higher values together in one category at 26 nmol/L and greater:

Blood Mercury (nmol/L)    Odds Ratio
5.00                      1.00
6 to 10                   0.63
11 to 15                  0.98
16 to 20                  1.07
21 to 25                  1.00
26 and greater            3.00

Chi-square for linear trend = 5.897
p-value = 0.015
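For anyone who wants to check this kind of arithmetic, here is a sketch of the chi-square test for linear trend (the Armitage trend test) in pure Python. The first and last categories use the counts reported above (14 cases and 10 controls at 5.0 nmol/L; 21 cases and 5 controls above 25 nmol/L), but the middle categories are hypothetical fill-ins, so the statistic will not exactly reproduce the 5.897 reported.

```python
# Sketch: chi-square test for linear trend (Armitage trend test).
# End categories match the counts reported in the post; the MIDDLE
# categories are hypothetical fill-ins that preserve the group totals.
import math

scores   = [0, 1, 2, 3, 4, 5]          # ordered exposure-category scores
cases    = [14, 20, 12, 10, 4, 21]     # ASD cases per category (middle 4 hypothetical)
controls = [10, 22, 9, 6, 2, 5]        # controls per category (middle 4 hypothetical)

totals = [a + b for a, b in zip(cases, controls)]
N, R = sum(totals), sum(cases)
p_bar = R / N                           # overall proportion of cases

# U is the score-weighted excess of cases; V is its variance under H0.
U = sum(s * r for s, r in zip(scores, cases)) - p_bar * sum(s * n for s, n in zip(scores, totals))
V = p_bar * (1 - p_bar) * (sum(s * s * n for s, n in zip(scores, totals))
                           - sum(s * n for s, n in zip(scores, totals)) ** 2 / N)

chi2_trend = U * U / V
# Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
p_value = math.erfc(math.sqrt(chi2_trend / 2))
print(f"chi-square for trend = {chi2_trend:.3f}, p = {p_value:.3f}")
```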

So the linear trend is statistically significant, but it’s completely “driven” by the 21 ASD cases with blood mercury levels of greater than 25 nmol/L. At this point there’s a real temptation to analyze the data using a cut-off point of 25. This is post-hoc analysis based on what we’ve seen in the data, so it’s questionable, but I’ll go ahead with it anyway:

Odds Ratio
(with 95% Confidence Interval)

Post-hoc analysis
Using blood mercury cut-off point of 25 nmol/L
(above 25 considered high mercury level)

Odds Ratio = 3.4

95% Exact CI: 1.1 - 12.4
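Since the counts behind this post-hoc analysis appear earlier in the post (21 of 81 ASD cases and 5 of 54 controls with values greater than 25 nmol/L), the odds ratio is easy to verify. SciPy’s Fisher exact test returns the sample odds ratio and an exact p-value; its confidence-interval machinery differs from the Mehta exact method I used, so only the point estimate is checked here.

```python
# Verify the post-hoc odds ratio at the 25 nmol/L cut-off, using the counts
# given earlier in the post.
from scipy.stats import fisher_exact

table = [[21, 60],   # ASD cases: >25 nmol/L, <=25 nmol/L (21 of 81)
         [5, 49]]    # controls:  >25 nmol/L, <=25 nmol/L (5 of 54)

odds_ratio, p_value = fisher_exact(table)
# Sample odds ratio = (21 * 49) / (60 * 5) = 1029 / 300 ≈ 3.43
print(round(odds_ratio, 2), round(p_value, 3))
```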

Logistic regression analysis

Now that we have a much better picture of differences between the cases and controls, I think it’s okay to run a logistic regression analysis. These are the results:

Chi-square = 5.9955; df = 1; p = 0.014

Odds Ratio = 1.04
95% Confidence Interval: 1.005 to 1.075

The odds ratio can be interpreted as follows: for every 1 nmol/L increase in blood mercury, the odds of being an ASD case rather than a control increase by a factor of 1.04, that is, by about 4 percent. Note that this effect size is “on average.” There’s obviously no way of knowing simply from this effect size estimate (OR = 1.04) that all of the differences between ASD cases and controls occur at greater than 25 nmol/L.

Conclusions

1. I want to emphasize that this post is in no way meant as an ad hominem attack on Dr. DeSoto or Dr. Hitlan or the Editor of the Journal of Child Neurology. I ask commenters to refrain from such attacks in the discussion.

2. Indeed the main point of this post is that data analysts should “look before they jump.” Look at the data carefully using visual methods like the charts above, or carry out detailed cross-tabulations, before you jump in and start running logistic regressions, etc.

3. I’m not making any assumptions about what DeSoto & Hitlan did or did not do in exploratory or preliminary analyses. But all I have to work with is what’s in the published paper. The paper is four pages long, yet only one 8-line paragraph is devoted to the main result. On the other hand, three relatively long paragraphs are devoted to lecturing Ip and colleagues on why they (Ip et al.) should have used a one-tailed test.

4. This is a relatively small data set with weird and unstable distributions of blood mercury. Unfortunately, there are very few data sets with information on blood mercury that include both autism cases and a control group, so we must consider it an “important data set.”

5. The analyses of Ip et al. (2004) and of DeSoto and Hitlan (2007), in which the mean blood mercury levels of ASD cases and controls were compared, were statistically inappropriate. Any argument that the statistically significant p-value found by DeSoto & Hitlan just goes to show the “robustness” of the t-test is absurd.

6. DeSoto and Hitlan (2007) concluded that “a significant relation does exist between the blood levels of mercury and diagnosis of an autism spectrum disorder.” I disagree. In my opinion, this statement is too strong.

7. What is my conclusion about what this data set tells us about the association between blood mercury and autistic spectrum disorder? Not much. I don’t think it shows a significant relationship. On the other hand (and this is important), I don’t think that it shows that there is not a relationship either.

In my pre-planned dichotomous analysis above, I found an odds ratio of 1.86, with a lower 95% confidence limit of 0.86. An odds ratio of 1.86 is of moderate strength, but this is clearly not statistically significant. The trend analysis shows that odds ratios are stable (i.e., consistently close to 1.00) until we reach blood levels higher than 25 nmol/L, when the odds ratio is 3.00. In a post-hoc analysis using 25 nmol/L, I found an odds ratio of 3.4, with a 95% confidence interval of 1.1 to 12.4.  You can see the logistic regression findings above, but my opinion is that these are the least important findings of the entire series of analyses.  We did find a “statistically significant” odds ratio of 1.04 (95% CI: 1.005 to 1.075; p = 0.014), but this tells us much less than the graphical analysis and the trend analysis of odds ratios.

Given that these results come from a case-control study with such a small sample size, they are really of the “more research is needed” variety. Again, my opinion: I don’t think there’s a significant relationship. Nor do I think there’s definitively not a relationship.

8. DeSoto and Hitlan (2007) report an r of .20 and an r2 of .04. They then devote part of the last paragraph of their paper to discussing why an “effect size” of .04 is important. This would have to be a subject for a whole other post, but like most epidemiologists (and sociologists and econometricians), I consider correlational statistics like r’s and r2’s essentially useless as measures of effect. Class: for tomorrow, read the classic paper, “The fallacy of employing standardized regression coefficients and correlations as measures of effect.” I’ll probably do a post on the subject anyway, but be ready for a pop quiz.

9. We can conclude absolutely nothing about the association of ethylmercury in vaccines to autism from these data.

10. As usual, your questions and comments are welcome. Agree, disagree, or whatever, but be civilized.



Important note and apologies to Drs. DeSoto and Hitlan, Ken, efrique, and my readers: The original article that I posted on Wednesday, July 16th, was revised on the afternoon of Saturday, July 19th. I was somewhat puzzled by Ken and efrique’s comments. Then I realized that I had not published the final version of my post on July 16th, but an earlier draft. In other words, I screwed up. That’s what happens when you blog at 4:00 in the morning. Thank you, Ken and efrique, for your comments.

Essentially the changes are these: I have performed my own logistic regression analysis, but I have NOT changed any of my conclusions. There are also a few changes in the paragraph in which I describe DeSoto and Hitlan’s Results section.


Increases Seen in Teen Birth, Low Birth Weight

The nation’s fourth and eighth graders scored higher in reading and mathematics than they did during their last national assessment, according to the federal government’s latest annual statistical report on the well-being of the nation’s children. Not all the report’s findings were positive; there also were increases in the adolescent birth rate and the proportion of infants born at low birthweight.

These and other findings are described in America’s Children in Brief: Key National Indicators of Well-Being, 2008. The report is compiled by the Federal Interagency Forum on Child and Family Statistics, a working group of Federal agencies that collect, analyze, and report data on issues related to children and families, with partners in private research organizations. It serves as a report card on the status of the nation’s children and youth, presenting statistics compiled by a number of federal agencies in one convenient reference.

“In 2007, scores of fourth and eighth graders were higher in mathematics than in all previous assessments and higher in reading than in 2005,” said Valena Plisko, associate commissioner of the National Center for Education Statistics, a part of the U.S. Department of Education.

This year’s report also saw an increase in low birthweight infants (less than 5 pounds 8 ounces). Low birthweight infants are at increased risk for infant death and such lifelong disabilities as blindness, deafness and cerebral palsy.

“This trend reflects an increase in the number of infants born prematurely, the largest category of low birthweight infants,” said Duane Alexander, M.D., director of the Eunice Kennedy Shriver National Institute of Child Health and Human Development at the National Institutes of Health. Although not all the reasons for the increase are known, infertility therapies, delayed childbearing and an increase in multiple births may be contributing factors.

The birth rate among adolescent girls ages 15 to 17 also increased, from 21 live births for every 1,000 girls in 2005, to 22 per 1,000 in 2006. This was the first increase in the past 15 years.

Among the favorable changes in the report were a decline in childhood deaths from injuries and a decrease in the percentage of eighth graders who smoked daily.

The Forum’s Web site at http://childstats.gov contains all data updates and detailed statistical information accompanying this year’s America’s Children in Brief report. As in previous years, not all statistics are collected on an annual basis and so some data in the Brief may be unchanged from last year’s report.

Members of the public may access the report online at http://childstats.gov. Alternatively, members of the public also may obtain printed copies from the Health Resources and Services Administration, Information Center, P.O. Box 2910, Merrifield, VA 22116, by calling 1-888-Ask-HRSA (1-888-275-4772), or by emailing ask@hrsa.gov.

The Forum alternates publishing a detailed report, America’s Children: Key National Indicators of Well-Being, with a summary version that highlights selected indicators. This year, the Forum is publishing America’s Children in Brief; it will publish the more detailed report in 2009.

The data tables and figures for all the indicators in this year’s brief are available at http://childstats.gov.

Family and Social Environment

The birth rate for unmarried women ages 15–44 increased from 48 births per 1,000 unmarried women in 2005 to 51 births for every 1,000 in 2006. The report describes a long-term increase in the unmarried birth rate between 1960 and 1994, followed by a “relatively stable” unmarried birth rate between the mid-1990s and 2002 and a rapid rise since 2002. A related measure, the proportion of births to unmarried women, also saw an increase; 38 percent of all births were to unmarried women in 2006, up from 37 percent of births in 2005.

The adolescent birth rate (among married and unmarried adolescents) increased from 21 births per 1,000 teenage girls ages 15–17 in 2005 to 22 births per 1,000 girls in 2006. The 2006 increase was the first seen in this measure since the increase between 1990 and 1991.

Economic Circumstances

Measures of poverty status, secure parental employment, and food security did not change significantly from the previous year. In 2006, 17 percent of all children under age 18 lived in poverty. The percentage of children who had at least one parent working year round, full time was 78 percent, not different from 2005, but below the peak of 80 percent in 2000.

Health Care

One measure, the proportion of children with any form of health insurance for at least some time during the year, declined, from 89 percent in 2005, to 88 percent in 2006. For 2006, the number of children lacking any form of health insurance coverage for the entire year was 8.7 million, or 12 percent. For each year since 1996, between 85 and 90 percent of children have had health insurance at some point during the year.

The measure on health insurance coverage also examined the type of health insurance the children had. For 2006, 65 percent of children were covered by private health insurance and 30 percent of children were covered by public health insurance. (Types of health insurance coverage are not mutually exclusive. In a given year, some children may be covered by both private and public health insurance and so may have been counted more than once.)

In 2006, 81 percent of children ages 19–35 months received the recommended combined five-vaccine series (often referred to as the 4:3:1:3:3 combined series), a proportion unchanged from the previous year. Overall, coverage with the combined series has increased since 1996. In 2006, coverage with the series was higher among White, non-Hispanic children (82 percent) than among Black, non-Hispanic (77 percent) or Hispanic children (80 percent).

The combined series includes 4 or more doses of diphtheria, tetanus toxoids, and pertussis vaccines, diphtheria and tetanus toxoids, or diphtheria, tetanus toxoids, and any acellular pertussis vaccine (DTP/DT/DTaP); 3 doses of poliovirus; 1 or more doses of any measles-containing vaccine; 3 or more doses of Haemophilus influenzae type b (Hib) vaccine; plus 3 or more doses of Hepatitis B vaccine. The recommended 2008 immunization schedule for children is available at http://www.cdc.gov/vaccines/recs/schedules/child-schedule.htm#printable.

Physical Environment and Safety

Injury deaths among children ages 5–14 declined from 8.2 per 100,000 children in 2004 to 7.7 per 100,000 in 2005. Injury deaths among adolescents ages 15–19 also declined, from 51.3 per 100,000 in 2004 to 49.8 in 2005. The report noted, however, that death rates among adolescents due to homicide increased in 2005 for the first time since 1993.

Behavior

The proportion of eighth graders who reported smoking cigarettes daily during the past 30 days declined from 4 percent in 2006 to 3 percent in 2007. This is a substantial decline from 1996, when 10 percent of eighth graders reported smoking cigarettes daily. Other risky behaviors, such as alcohol and drug use, were unchanged from their previous levels.

Health

The proportion of infants born at low birthweight increased from 8.2 percent in 2005 to 8.3 percent in 2006. The report noted that the percentage of low birthweight infants has increased for the last two decades. The percentage of low birthweight infants was 8.1 in 2004 and 7.0 in 1990. The report attributed the increase to such factors as an increase in the number of multiple births; obstetric interventions such as induction of labor and cesarean delivery; infertility therapies; and delayed childbearing.

Adolescent Health

If your interest is more focused on adolescent health, I recommend reading the new report America’s Children in Brief: Key National Indicators of Well-Being, 2008 along with another recent government report, Adolescent Health in the United States, 2007. Copies of the Adolescent Health report can be obtained from its author, Andrea MacKay, at anm3@cdc.gov, at the National Center for Health Statistics.

Sphere: Related Content

I’m on vacation.

Or I’m about to be, and my wife is getting a wee bit impatient, so I need to start packing, etc.  You know the drill.

Several weeks ago I argued that much of the observed increase in autistic disorder over time can be explained by three phenomena: (1) Diagnostic criteria have changed over some part of the period during which increases have been observed. The diagnostic criteria for autistic disorder were broadened over time. (2) The average age of diagnosis for autistic disorder became younger. (3) The efficiency of ascertainment (the probability that a true case is identified) has increased with greater awareness of the condition, introduction of new treatments and new resources, advocacy, broadening of diagnostic experience, and changes in diagnostic practices.

In another post in May I described a small study from England that “adds to arguments against the view that incidence of autism has increased over recent decades, and suggests that changes in diagnostic criteria are the most likely reason for the rise in the number of cases diagnosed.” I pointed out, however, that this small study was only a first step and we need more studies with larger sample sizes.

In the July 2008 issue of the Journal of Autism and Developmental Disorders, a much larger and more elegant study has been published. I would love to give you a detailed description, but I’m about to go on vacation. So quoting the abstract will have to suffice for now.

Trends in Autism Prevalence: Diagnostic Substitution Revisited

By Helen Coo and Hélène Ouellette-Kuntz of the Department of Community Health and Epidemiology, Queen’s University; Jennifer E. V. Lloyd of the Human Early Learning Partnership (HELP); and three other authors.

SUMMARY: The authors examined trends in assignment of special education codes to British Columbia (BC) school children who had an autism code in at least 1 year between 1996 and 2004, inclusive. The proportion of children with an autism code increased from 12.3/10,000 in 1996 to 43.1/10,000 in 2004; 51.9% of this increase was attributable to children switching from another special education classification to autism (16.0/10,000). Taking into account the reverse situation (children with an autism code switching to another special education category, 5.9/10,000), diagnostic substitution accounted for at least one-third of the increase in autism prevalence over the study period.
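The percentages in the abstract can be checked with a few lines of arithmetic (all numbers are taken directly from the summary above):

```python
# Checking the Coo et al. diagnostic-substitution figures
# (rates per 10,000 BC schoolchildren, from the abstract above).
rate_1996 = 12.3
rate_2004 = 43.1
switched_to_autism = 16.0    # another special-ed code -> autism code
switched_from_autism = 5.9   # autism code -> another special-ed code

total_increase = rate_2004 - rate_1996            # 30.8 per 10,000
gross_share = switched_to_autism / total_increase
net_share = (switched_to_autism - switched_from_autism) / total_increase

print(f"Total increase: {total_increase:.1f} per 10,000")
print(f"Gross substitution share: {gross_share:.1%}")   # the 51.9% figure
print(f"Net substitution share: {net_share:.1%}")       # the "at least one-third"
```

The gross share counts only children switching into the autism category; the net share subtracts the children switching out, which is why the abstract hedges with "at least one-third."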

Sphere: Related Content

I’ve spent part of the last couple of days reading some of the arguments against the modern U.S. childhood vaccine schedule at places like the National Vaccine Information Center, SafeMinds, and in the medical investigative reporting of Robert F. Kennedy, Jr. One of the statements you run across quite often is that today’s children — going back to children born in the early 1990’s — are less healthy and generally much sicker than children of earlier generations. Where does this idea come from? It turns out that one of the places that it comes from is a 2007 commentary published in the Journal of the American Medical Association (JAMA), entitled “The Increase of Childhood Chronic Conditions in the United States.” This commentary, by James Perrin, Sheila Bloom, and Steven Gortmaker — all of Harvard University — got an enormous amount of media attention, so it’s no wonder that it’s cited so often. Unfortunately, one of the most-cited statements from the commentary is an observation about time trends, in which the authors’ interpretations of the data are just downright wrong.

Bloomberg News Service started out their report on the JAMA commentary with the shocker, “The number of American children with chronic illnesses has quadrupled since the time when some of their parents were kids, portending more disability and higher health costs for a new generation of adults, a study estimates.” This is based on the following sentence from the JAMA commentary: “In 1960, only 1.8% of US children and adolescents were noted by their parents to have a limitation of activity due to a health condition of more than 3 months’ duration; in 2004, these rates had increased to more than 7% or more than 5 million children and youth.” This sentence is so loaded with problems that I need to devote a whole post to it, especially since it’s been quoted so often by the media.

If you’re a stickler for exactitude, you might be happy about two things, but not for long. Both the 1960 percentage of 1.8 and the 2004 percentage of 7 come from the same annual survey, The National Health Interview Survey (NHIS) carried out by the National Center for Health Statistics (NCHS). Both refer to the “percentage of children with limitation of activity resulting from one or more chronic health conditions.” Unfortunately, there have been several changes over time in the NHIS that render the two percentages incomparable, but the biggest problems are that:
(1) the definition of “children” in standard NHIS tabulations changed from 0-16 years old to 0-17 years old; and
(2) the NHIS question about activity limitation due to a chronic health condition changed completely between 1996 and 1997.

National Health Interview Survey Questionnaire Probes for Determining Presence of Activity Limitations, 1969-1996

Age under 1 year:
Is __ limited in any way because of his health?
Age 1-5 years:
Is __ able to take part at all in ordinary play with other children?
Is he limited in the kind of play he can do because of his health?
Is he limited in the amount of play because of his health?
Age 6-16 years:
In terms of health would __ be able to go to school?
Does (would) __ have to go to a certain type of school because of his health?
Is he (would he be) limited in school attendance because of his health?
Is he limited in the kind or amount of other activities because of his health?
All ages responding NO to the above probes:
Is __ limited in ANY WAY because of a disability or health? (Added in 1969)
[Note: At one point, the interviewer explains that the health condition or disability must have a duration of three months or more.]

(Source: National Health Interview Survey questionnaire 1980.)

National Health Interview Survey Questionnaire Probes for Determining Presence of Activity Limitations, 1997-present

Age Under 5 years: Parent is asked:
“Is (child’s name) limited in the kind or amount of play activities [he/she] can do because of a physical, mental, or emotional problem?”
[Note: At one point, the interviewer explains that the physical, mental, or emotional problem must be a condition that once acquired is not cured or has a duration of three months or more.]

Age 0-18 years: Parent is asked:
(1) “Does (child’s name) receive Special Education Services or Early Intervention Services?”
(2) “Is (child’s name) limited in any activities because of physical, mental, or emotional problems?”

Age 3-17: Parent is also asked:
(3) “Because of a physical, mental, or emotional problem, does (child’s name) need the help of other persons with personal care needs, such as eating, bathing, dressing, or getting around inside the home?”
(4) “Because of a health problem does (child’s name) have difficulty walking without using any special equipment?”
(5) “Is (child’s name) limited in any way because of difficulty remembering or because of periods of confusion?”

(Source: National Health Interview Survey 2006.)*

As I noted above, the JAMA Commentary said: “In 1960…only 1.8% of US children and adolescents were noted by their parents to have a limitation of activity due to a health condition of more than 3 months’ duration…” The actual source for this percentage of 1.8 is an excellent 1984 paper by Paul Newacheck and colleagues in the American Journal of Public Health. For the subject of today’s post, the paper by Newacheck et al. is useful for two reasons: First, it has a table (Table 1) showing the year-by-year trend in the percent of children (under 17 years of age) with limitation of activity between 1960 and 1981. During that period the reported percentage increased from 1.8 to 3.8. Second, the authors examine in detail “the hypothesis that increased prevalence of activity limitations can be explained by changes in survey procedures, changes in awareness of illness, and/or changes in the size of the institutional population.” For example:

1. Prior to 1967, only those respondents who had reported a chronic condition in response to probes earlier in the interview were asked about the presence of an activity limitation.
2. Beginning in 1967, questions pertaining to activity limitation were asked of all sample persons.
3. Also beginning in 1967, activity limitation categories were read to the respondent; previously, respondents had been asked to choose an appropriate activity limitation response from a printed card.
4. Beginning in 1969, when persons responded negatively to the usual probes on activity limitation, an additional question was asked: “Is __ limited in ANY WAY because of a disability or health?” It was then left to the coder to determine whether the response would be classified as an activity limitation.

Since the Newacheck et al. paper is a public-access article, I’ll leave it to you to read their thoughts on changes in the awareness of illness during 1960-1981 (“…increased awareness has not been a major contributor to the upward trend.”) and changes in the institutionalized population during the period. I do agree with Newacheck et al. that between the early 1960’s and the early 1980’s there probably was a near doubling of the proportion of children with limitations of activity due to chronic illness.

But what I’d really like to show you is more recent time trends, especially trends in the 1990’s and early part of this decade. Most of the following data comes from the report, America’s Children: Key National Indicators of Well-Being, which has been published annually by the Federal Interagency Forum on Child and Family Statistics since 1997. For some years, America’s Children did not publish data for 0-4 year old children. For those years, I got the data from Health, United States, an annual report on trends in health statistics published by NCHS.

It seems that the standard NCHS definition of “children” for NHIS tabulations from 1960 through the 1980’s was “under 17 years of age.” I’m not sure why they chose 1984, but the Federal Interagency Forum on Child and Family Statistics seems to have asked NCHS to go back and do special tabulations for 1984, so they could use 1984 as their baseline or “benchmark” year.

TABLE 1. PERCENTAGE OF CHILDREN AGES 0-17 WITH ACTIVITY LIMITATION RESULTING FROM ONE OR MORE CHRONIC HEALTH CONDITIONS BY AGE, 1984

Year Total Age 0-4 Age 5-17
1984 5.0 2.5 6.1
Table 2 shows prevalence rates from 1990 to 1996, when the NHIS question about limitation of activity due to chronic disease was the same as in 1984.

TABLE 2. PERCENTAGE OF CHILDREN AGES 0-17 WITH ACTIVITY LIMITATION RESULTING FROM ONE OR MORE CHRONIC HEALTH CONDITIONS BY AGE, 1990-1996

Year Total Age 0-4 Age 5-17
1990 4.9 2.2 6.1
1991 5.8 2.4 7.2
1992 6.1 2.8 7.5
1993 6.6 2.8 7.5
1994 6.7 3.1 8.2
1995 6.0 2.7 7.4
1996 6.1 2.6 7.5

(Source: National Center for Health Statistics, National Health Interview Survey, 1990-1996.)

It looks like there was a jump in the prevalence rate between 1990 and 1991. Then the rates remained essentially stable between 1991 and 1996. Before you get all excited and conclude that “something happened” between 1990 and 1991 to “cause” these rates to increase, sit back and take ten deep breaths while I explain a few things. First, the entire increase occurred in 5 to 17 year-old children — not in infants and pre-schoolers. Second, these are not birth cohorts. The 5-17 year old children in the 1990 NHIS were born during the period 1973-1985, and the 5-17 year old children in the 1991 NHIS were born during the period 1974-1986. I hope you get the point.

Table 3 shows prevalence rates after 1996, when two things happened with the NHIS. First, as I mentioned above, the question on limitation of activity due to chronic illness changed enormously. Second, and equally important, between 1996 and 1997 a major NHIS Redesign occurred, which means that the sampling frame, sampling methodology, and many other statistical aspects of the survey changed. In short, both subject matter experts on childhood chronic disease and disability and statisticians agree that prevalence rates calculated from the NHIS in 1996 and before, and in 1997 and after, are not comparable.

TABLE 3. PERCENTAGE OF CHILDREN AGES 0-17 WITH ACTIVITY LIMITATION RESULTING FROM ONE OR MORE CHRONIC HEALTH CONDITIONS BY AGE, 1997-2006

Year Total Age 0-4 Age 5-17
1997 6.6 3.5 7.8
1998 No Data No Data No Data**
1999 6.0 3.1 7.0
2000 6.0 3.2 7.0
2001 6.8 3.3 8.0
2002 7.1 3.2 8.5
2003 6.9 3.6 8.1
2004 7.0 3.5 8.4
2005 7.0 4.3 8.0
2006 7.3 3.9 8.6

(Source: National Center for Health Statistics, National Health Interview Survey, 1997-2006.)

My interpretation of this trend between 1997 and 2006 is that the overall prevalence rate is fairly stable. The same goes for the rate stratified by age: the prevalence for both infants and pre-schoolers and for 5-17 year olds seems pretty stable. I’ve provided tables with the actual prevalence rates, instead of a chart, so you can make your own charts, argue with me, and argue with each other to your heart’s content. (If you’re interested in statistical significance: for America’s Children, NHIS staff did significance tests for differences between years for the period 1997 to 2005. No significant differences were found between years (p > 0.05). This means that the differences between ’97 and ’98, ’98 and ’99, etc. were all nonsignificant — adjacent years, in other words. As far as I know, no other significance tests have been done, e.g., for non-adjacent years or for trends.)
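For readers who want to take up the invitation to make their own charts, here is a minimal Python sketch using only the “Total” column from Tables 2 and 3. The two series are kept separate on purpose: the 1996/1997 redesign means they should never be joined into a single trend line.

```python
# The "Total" column from Tables 2 and 3. 1998 is omitted (no data),
# and the two dicts are kept separate because the 1996/1997 NHIS
# redesign makes the series non-comparable.
table2 = {1990: 4.9, 1991: 5.8, 1992: 6.1, 1993: 6.6,
          1994: 6.7, 1995: 6.0, 1996: 6.1}
table3 = {1997: 6.6, 1999: 6.0, 2000: 6.0, 2001: 6.8, 2002: 7.1,
          2003: 6.9, 2004: 7.0, 2005: 7.0, 2006: 7.3}

for label, series in (("1990-1996", table2), ("1997-2006", table3)):
    values = series.values()
    print(f"{label}: range {min(values)}-{max(values)}, "
          f"mean {sum(values) / len(values):.1f}")
```

A simple range-and-mean summary like this makes the point about within-series stability without any charting library at all.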

So my conclusions about the data on time trends in activity limitation due to chronic illness are: first, based on the 1984 paper by Newacheck et al., there was probably nearly a doubling of the prevalence rate between the early 1960’s and the early 1980’s. Second, based on the data shown above, I think the prevalence rate has remained fairly stable between the mid-1980’s and 2006.

The ramifications of this are extremely important, especially regarding children of the 1990’s.

1. Children who were born and who grew up in the 1990’s aren’t any “sicker” than the previous generation, at least using this particular overall measure of chronic illness. The same thing seems to be true about children born during this decade — at least so far. I cannot disagree with the JAMA Commentary’s statement that childhood asthma prevalence has doubled since the 1980’s. And there’s no doubt that childhood obesity has more than tripled since the early 1970’s. The JAMA commentary also points out: “Approximately 6% of school-age children have a reported diagnosis of attention-deficit/hyperactivity disorder (ADHD), which represents a dramatic increase, although changes in diagnostic practices are clearly one reason. For example, there was no entry for ADHD in the American Psychiatric Association manual until 1968…Similar questions arise for autism spectrum disorders; whether or not there have been true changes in prevalence, it is clear that rates of diagnosis have increased.” (The JAMA commentary also cites a great review article entitled, “Is there an epidemic of child or adolescent depression?”, which I highly recommend. The simple answer is NO, there is not an epidemic of child or adolescent depression.)

2. The focus of the National Health Interview Survey is obviously health, health care, and illness. The focus is not on neurodevelopmental disabilities such as ADHD, autism, and so forth, even though there definitely are questions that pertain to these health issues. I think that it’s extremely important that in this context, when parents are questioned about limitation of activity, NHIS data do not show a rising trend in line with the marked increase in autism and ADHD — or in the diagnoses of autism and ADHD.

Perhaps I’m making too much of this last point. After all, the reported prevalence of autism is still only 6.6 per 1,000, which is 0.66%. On the other hand, to estimate rates of parent-reported ADHD diagnosis, the CDC analyzed data from the 2003 National Survey of Children’s Health and reported that 7.8% (95% confidence interval 7.4-8.1) of U.S. children aged 4 to 17 years had had ADHD diagnosed at some point. However, according to some additional tabulations in Health, United States, 2007, of the 8.2% of 5 to 17 year old children whose parents reported them to have activity limitation resulting from a chronic health condition in 2004-2005 (see Table 3 above), in about 25% of cases at least one of the chronic health conditions causing the activity limitation was reported to be ADHD. In other words, according to recent NHIS data the prevalence of ADHD that results in activity limitation is about 2% in 5-17 year olds. Contrast this with the overall ADHD prevalence from the CDC study just mentioned above.
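The back-of-the-envelope arithmetic behind that roughly 2% figure, using only the numbers quoted above:

```python
# Rough arithmetic behind the ~2% estimate of activity-limiting ADHD
# among 5-17 year olds (figures as quoted in the text above).
limited = 0.082            # share with activity limitation, 2004-2005 NHIS
adhd_among_limited = 0.25  # share of those with ADHD as a reported cause

activity_limiting_adhd = limited * adhd_among_limited
print(f"Activity-limiting ADHD prevalence: {activity_limiting_adhd:.1%}")

# Compare with the CDC's parent-reported ever-diagnosed ADHD figure:
ever_diagnosed = 0.078
print(f"Ever-diagnosed ADHD prevalence: {ever_diagnosed:.1%}")
```

The contrast — roughly 2% with activity-limiting ADHD versus 7.8% ever diagnosed — is exactly the gap between a diagnosis on the record and a condition that actually limits a child's activities.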

Anyway, I stand by my major point above, that the kids growing up today — those born in the 1990’s and in this century — are not a “sickly” generation.

So I’ll leave you with the profound words of Pete Townshend, written in 1965 (and perhaps you can ponder how I wasted my wild, impetuous youth):

I don’t mind other guys dancing with my girl
That’s fine, I know them all pretty well
But I know sometimes I must get out in the light
Better leave her behind with the kids, they’re alright
The kids are alright!

*Note: I’ve shortened the series of questions to make them more readable. For the exact questions, skip patterns, etc., you can see the questionnaire at ftp://ftp.cdc.gov/pub/Health_Statistics/NCHS/Survey_Questionnaires/NHIS/2006/English/QFAMILY.pdf

Sphere: Related Content

Get a load of this. David Kirby has rewritten and re-posted his story from Friday (June 20). The new June 21st story is entitled, CDC: Vaccine Study Used Flawed Methods. It starts with the following:

(NOTE: My original post on this topic mischaracterized the 2003 CDC vaccine investigation as an “Ecological Study,” which it was not. I am reposting this piece to reflect that information accurately, but also to point out that many of the weaknesses identified in the CDC’s data and methods apply to the published 2003 “retrospective cohort” study, as much as they do to any future “ecological” ones. I regret and apologize for the error.)

I hope I’m not getting a big ego, but I have a suspicion that Kirby read my post critiquing his story of yesterday, decided he had been confused about ecologic studies, and decided to create a new story. The damn problem is that Kirby’s new article is now even more confused and erroneous than the first one. The first sixteen paragraphs of Kirby’s new article are devoted to the Verstraeten et al. 2003 study that found no link between mercury in vaccines and autism, ADHD, speech delay or tics. Kirby claims that this study was a major issue in both the 2006 Report of the NIEHS Expert Panel and in the CDC Report responding to the NIEHS report.

So here’s some cutting and pasting from my previous post, to remind you of some of Kirby’s major misinterpretations of the NIEHS Report, especially regarding the Verstraeten et al. 2003 study.

Nowhere in the 2006 report did the NIEHS panel conclude that the CDC’s 2003 thimerosal safety study was riddled with “several areas of weaknesses” that combined to “reduce the usefulness” of the study. In fact, in the NIEHS panel meeting that generated the 2006 report, the quality of the CDC’s 2003 thimerosal safety study was not even discussed. This can be seen clearly if you carefully read the NIEHS Report of the Expert Panel.

Earlier this week Epi Wonk had a long discussion with one of the Expert Panel Members (who adamantly insisted that he/she remain nameless), who confirmed three things for me:
(1) The purpose of the NIEHS Expert Panel was exactly as stated in the report:

It has been proposed that the Vaccine Safety Datalink could be used to look at the association between autistic disorder (AD) or autism spectrum disorders (ASD) by means of an ECOLOGIC ANALYSIS (emphasis mine) that would compare rates before and after the removal of thimerosal from most childhood vaccinations. To determine the feasibility and potential contribution and/or drawbacks of such a study, and to consider alternative study designs that could be conducted using the VSD database, the NIEHS convened a panel of experts…

(2) The quality of previous epidemiological studies of the association between thimerosal and autism was not discussed.
(3) The overall quality of the 2003 Verstraeten et al. study was not discussed. Indeed, in the section of the report in which the expert panel considered study designs other than ecologic analyses, which they did dismiss as riddled with “several areas of weaknesses” that combined to “reduce the usefulness” of ecologic studies, the expert panel “…recommended that further consideration be given to conducting an extension of the Verstraeten study that would include additional years for follow up, would add more managed care organizations and reexamine the criteria for exclusion of births and/or take a sensitivity analyses approach to examining the impact of various exclusion criteria.”

And we still have these discrepancies between the new story and the actual CDC Report:

KIRBY: …the NIEHS had criticized CDC for failing to account for other mercury exposures, including maternal sources from flu shots and immune globulin, as well as mercury in food and the environment. CDC acknowledges this concern and recognizes this limitation, the Gerberding reply says.
ACTUAL QUOTE FROM CDC REPORT: NIEHS Finding: Difficulty in estimating cumulative exposure of child to organic mercury: The panel expressed concern that VSD administrative data or medical charts would not be accurate in recording or estimating a child’s total mercury exposure from sources other than vaccines, such as diet, air and water. CDC Response: CDC acknowledges this concern and recognizes this limitation. In addition to administrative data and medical chart review, CDC has employed parent interviews to identify total cumulative mercury exposure from sources other than vaccines, such as diet. Often, however, parent recall, for events several years in the past, poses limitations as well.

KIRBY: “The NIEHS also questioned why CDC investigators eliminated 25% of the study population for a variety of reasons, even though this represented, “a susceptible population whose removal from the analysis might unintentionally reduce the ability to detect an effect of thimerosal.” This strict entry criteria would likely lead to an “under-ascertainment” of autism cases, the NIEHS reported. Again, this would have been an issue in the Verstraeten data. “CDC concurs,” Gerberding wrote, again noting that VSD data are “not appropriate for studying this vaccine safety topic. The data are intended for administrative purposes and may not be predictive of the outcomes studied.”
FACT: The NIEHS Expert Panel did not “question why CDC investigators eliminated 25% of the study population.” On the contrary, when discussing potential alternative designs (other than ecological studies), another possibility that generated support by the panel was an expansion of the VSD study published by Verstraeten et al. The availability of several additional years of VSD data was seen as an opportunity to provide a more powerful test of any potential association between thimerosal and AD/ASD and would enable reconsideration of some aspects of the original study design (e.g., exclusion criteria). It was unclear to the panel what effect exclusion of low birth weight infants and those with congenital or severe perinatal disorders or born to mothers with serious medical problems of pregnancy had on the results of the Verstraeten et al. study; an expanded future study in which sensitivity analyses both including and excluding children with perinatal problems was recommended. The quote that begins with “CDC concurs” has no bearing on the Verstraeten et al. study, as implied by Kirby. Gerberding is responding to an NIEHS Expert Panel point about case ascertainment. Here is the entire quote from the CDC report: CDC responds: “CDC concurs with the recommendation that broader ICD-9 codes should be considered. The weakness further emphasizes why an ecological design is not appropriate for studying this vaccine safety topic using the VSD. The VSD data are intended for administrative purposes and may not be predictive of the outcome studied. Because the outcomes have not been validated and considering the sensitivity of this issue, any VSD study of vaccines and autism, including a broader list of ICD-9 codes, would require chart review.”

Kirby ends his new article with the following postscript:
“This revised piece does raise two new questions, I think:
1) If the VSD is not necessarily appropriate to help determine the effect of reducing mercury levels in vaccines, are taxpayers getting their money’s worth?
2) If studies done in Denmark, Sweden and California were also “ecological” in nature, are they subject to some of the same weaknesses and limitations?”

Epi Wonk Response:
1) Neither the NIEHS Report nor the CDC Report state anywhere that the VSD is not appropriate to help determine the effect of reducing mercury levels in vaccines.
The relevant summary statements are:
(A) The NIEHS panel identified several serious problems that were judged to reduce the usefulness of an ecologic study design using the VSD to address the potential association between thimerosal and the risk of AD/ASD.
(B) “CDC concurs”, Dr. Gerberding wrote, “that conducting an ecologic analysis using VSD administrative data to address potential associations between thimerosal exposure and risk of ASD is not useful.”
(C) The NIEHS “panel identified several major strengths of the VSD to be: its ability to detect infrequent, vaccine-related adverse events of modest size; the possibility to supplement the MCO administrative data with reviews of medical records, interviews with parents and children, and additional diagnostic assessments; and the availability of demographic information about the MCO members.”
(D) “CDC agrees with the panel’s assessment of the strengths of the VSD Project to evaluate vaccine safety concerns. The VSD is a unique public-private collaboration that provides a model for the study of patient safety concerns by using individual-level data. In addition, CDC recognizes the tremendous value of the VSD as a national resource of expertise in vaccine safety research.”

2) The NIEHS Expert Panel recommended that ecologic studies should not be done using the U.S. Vaccine Safety Datalink. Are completely different types of data from Denmark, Sweden, and California on which ecological analyses have been done subject to some of the same weaknesses and limitations? The answer is NO, but I suppose I’ll have to do an entire instructional post on this for Mr. Kirby’s benefit.

Sphere: Related Content

“Medical reporter” David Kirby has delivered a potentially explosive report to his unfortunate and misinformed minions at the Huffington Post, in which he shows a startling string of misunderstandings and complete lack of knowledge of basic epidemiologic design and methods. Furthermore, he writes that Dr. Julie Gerberding “admits to a startling string of errors in the design and methods used in the CDC’s landmark 2003 study that found no link between mercury in vaccines and autism, ADHD, speech delay or tics,” when, in fact, the CDC report admitted no such thing about the 2003 study.

Gerberding was responding to a 2006 Report of the Expert Panel on Thimerosal Exposure in Pediatric Vaccines: Feasibility of Studies Using the Vaccine Safety Datalink to the National Institute of Environmental Health Sciences (NIEHS). Nowhere in the 2006 report, however, did the NIEHS panel conclude that the CDC’s 2003 thimerosal safety study was riddled with “several areas of weaknesses” that combined to “reduce the usefulness” of the study. In fact, in the NIEHS panel meeting that generated the 2006 report, the quality of the CDC’s 2003 thimerosal safety study was not even discussed. This can be seen clearly if you carefully read the NIEHS Report of the Expert Panel.

In addition, earlier this week Epi Wonk had a long discussion with one of the Expert Panel Members (who adamantly insisted that he/she remain nameless), who confirmed three things for me:
(1) The purpose of the NIEHS Expert Panel was exactly as stated in the report:

“It has been proposed that the Vaccine Safety Datalink could be used to look at the association between autistic disorder (AD) or autism spectrum disorders (ASD) by means of an ECOLOGIC ANALYSIS (emphasis mine) that would compare rates before and after the removal of thimerosal from most childhood vaccinations. To determine the feasibility and potential contribution and/or drawbacks of such a study, and to consider alternative study designs that could be conducted using the VSD database, the NIEHS convened a panel of experts…”

(2) The quality of previous epidemiological studies of the association between thimerosal and autism was not discussed.
(3) The overall quality of the 2003 Verstraeten et al. study was not discussed. Indeed, in the section of the report in which the expert panel considered study designs other than ecologic analyses, which they did dismiss as riddled with “several areas of weaknesses” that combined to “reduce the usefulness” of ecologic studies, the expert panel “…recommended that further consideration be given to conducting an extension of the Verstraeten study that would include additional years for follow up, would add more managed care organizations and reexamine the criteria for exclusion of births and/or take a sensitivity analyses approach to examining the impact of various exclusion criteria.”

In the HuffPost story, David Kirby quotes Julie Gerberding as writing that her agency “does not plan to use” the Vaccine Safety Datalink (VSD) for any future “ecological studies” of autism. “In fact”, Kirby continues, “Gerberding’s report said, any continued use of the VSD for continued ecological studies of vaccines and autism ‘would be uninformative and completely misleading.’”

Well, yes, that’s what the CDC thinks about using the VSD for ecologic analyses. I couldn’t agree more. At this point I obviously need to step back and explain ecologic analyses. Fortunately, I taught epidemiologic design and methods for about 35 years. I had some students almost as clueless as David Kirby, but I’m a patient teacher. Another interesting fact is that only one ecologic study has ever been published using the VSD, and I’ve written extensively about that study on this blog. Guess what? It wasn’t done by the CDC, who knew better long before the 2006 NIEHS Expert Panel. I’m speaking of the infamous Young-Geier Autism Study. So let me paraphrase from my explanation of “ecologic” in my previous critique of that paper:

It seems that much of the confusion and difficulty in understanding the Young-Geier Autism paper comes from the use of the term ecological study or ecological design. In order to understand the concept of an ecological-level study, it’s best to first think of what is meant by an individual-level study. In an individual-level study the investigator has data on every variable for every participant in the study. This may sound ridiculously simple, but it needs careful explication here, because an ecological study is quite different. In an individual-level study of thimerosal-containing vaccines (TCVs) and neurodevelopmental disorders (NDs), for each child in the study we would have that child’s vaccination history, ND diagnosis or diagnoses (if diagnosed), exact age at diagnosis (if diagnosed), date of birth, gender, age when follow-up ended, and information on as many potential confounders as possible: birth weight, gestational age at birth, socioeconomic status, and so on, and all of those data would be linked together for that individual. Thus, in an individual-level study, the individual is the unit of analysis. Both the 2003 Verstraeten et al. study using the VSD and the Thompson et al. VSD special study of early thimerosal exposure and neuropsychological outcomes at 7 to 10 years were individual-level studies.
In an ecological study, data are collected at the group level, as opposed to the individual level. The group is the unit of analysis. In fact, it would probably be easier to think of the Young-Geier Autism study as a group-level study with a group-level design and a group-level analysis, rather than using the confusing term ecological.
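Since the unit of analysis is the whole point here, a toy numerical sketch may help. The numbers below are entirely hypothetical (they have nothing to do with the VSD or any real study): the very same underlying data can show an exposure associated with lower risk at the individual level while the group-level rates point the other way, the classic ecological fallacy.

```python
# A toy illustration (hypothetical numbers, not from any real study) of why
# the unit of analysis matters: the same data can show a protective
# association at the individual level and a positive one at the group level.

regions = {
    # region: (exposed_n, exposed_cases, unexposed_n, unexposed_cases)
    "A": (900, 90, 100, 20),   # high exposure, high baseline risk
    "B": (100, 5, 900, 90),    # low exposure, low baseline risk
}

# Individual-level analysis: pool every person, linking each person's
# exposure to that person's outcome.
exp_n = sum(r[0] for r in regions.values())
exp_cases = sum(r[1] for r in regions.values())
unexp_n = sum(r[2] for r in regions.values())
unexp_cases = sum(r[3] for r in regions.values())
risk_ratio = (exp_cases / exp_n) / (unexp_cases / unexp_n)

# Group-level ("ecologic") analysis: only aggregate rates per region are known.
for name, (en, ec, un, uc) in regions.items():
    exposure_prevalence = en / (en + un)
    overall_rate = (ec + uc) / (en + un)
    print(f"Region {name}: exposure={exposure_prevalence:.0%}, rate={overall_rate:.1%}")

print(f"Individual-level risk ratio: {risk_ratio:.2f}")  # 0.86: exposure looks protective
```

At the group level, the region with more exposure (A, 90%) also has the higher overall rate (11.0% vs. 9.5%), so the ecologic comparison suggests the exposure is harmful; yet within the pooled individual-level data the exposed have lower risk (risk ratio below 1). The aggregate trend here is driven entirely by a group-level difference in baseline risk, which is exactly why a group-level comparison cannot substitute for linking each child’s exposure to that child’s outcome.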
 
This post is getting a bit long, and I don’t want to bore you by dissecting every single one of Kirby’s false statements. But let’s pick out a few.
KIRBY: “The final NIEHS report was a serious and thoughtful critique of where the CDC went wrong in its design, conduct and analysis of the study. The NIEHS panel ‘identified several serious problems’ with the CDC’s effort.”
FACT: The final NIEHS report was a serious and thoughtful critique of “using the VSD to look at the association between autistic disorder (AD) or autism spectrum disorders (ASD) by means of an ecologic analysis that would compare rates before and after the removal of thimerosal from most childhood vaccinations, to determine the feasibility and potential contribution and/or drawbacks of such a study, and to consider alternative study designs that could be conducted using the VSD database.” The NIEHS panel “identified several serious problems that were judged to reduce the usefulness of an ecologic study design using the VSD to address the potential association between thimerosal and the risk of AD/ASD.”

 

KIRBY: “…the NIEHS had criticized CDC for failing to account for other mercury exposures, including maternal sources from flu shots and immune globulin, as well as mercury in food and the environment. ‘CDC acknowledges this concern and recognizes this limitation,’ the Gerberding reply says.”
ACTUAL QUOTE FROM CDC REPORT: “NIEHS Finding: Difficulty in estimating cumulative exposure of child to organic mercury: The panel expressed concern that VSD administrative data or medical charts would not be accurate in recording or estimating a child’s total mercury exposure from sources other than vaccines, such as diet, air and water. CDC Response: CDC acknowledges this concern and recognizes this limitation. In addition to administrative data and medical chart review, CDC has employed parent interviews to identify total cumulative mercury exposure from sources other than vaccines, such as diet. Often, however, parent recall, for events several years in the past, poses limitations as well.”

KIRBY: “The NIEHS also took CDC to task for eliminating 25% of the study population for a variety of reasons, even though this represented, ‘a susceptible population whose removal from the analysis might unintentionally reduce the ability to detect an effect of thimerosal.’ This strict entry criteria likely led to an ‘under-ascertainment’ of autism cases, the NIEHS reported. ‘CDC concurs,’ Gerberding wrote, again noting that its study design was ‘not appropriate for studying this vaccine safety topic. The data are intended for administrative purposes and may not be predictive of the outcomes studied.’”
FACT: These four sentences are outright lies. The NIEHS Expert Panel never “took the CDC to task for eliminating 25% of the study population…” On the contrary, when discussing potential alternative designs (other than ecological studies), “another possibility that generated support by the panel was an expansion of the VSD study published by Verstraten et al. The availability of several additional years of VSD data was seen as an opportunity to provide a more powerful test of any potential association between thimerosal and AD/ASD and would enable reconsideration of some aspects of the original study design (e.g., exclusion criteria).” It was unclear to the panel what effect the exclusion of low birth weight infants and those with congenital or severe perinatal disorders or born to mothers with serious medical problems of pregnancy had on the results of the Verstraeten et al. study; an expanded future study with sensitivity analyses both including and excluding children with perinatal problems was recommended. The quote that begins with “CDC concurs” has no bearing on the Verstraeten et al. study, as implied by Kirby. Gerberding is responding to an NIEHS Expert Panel point about case ascertainment. Here is the entire quote from the CDC report: “CDC responds: ‘CDC concurs with the recommendation that broader ICD-9 codes should be considered. The weakness further emphasizes why an ecological design is not appropriate for studying this vaccine safety topic using the VSD. The VSD data are intended for administrative purposes and may not be predictive of the outcome studied. Because the outcomes have not been validated and considering the sensitivity of this issue, any VSD study of vaccines and autism, including a broader list of ICD-9 codes, would require chart review.’”

KIRBY: “Another serious problem was that the HMOs changed the way they tracked and recorded autism diagnoses over time, including during the period when vaccine mercury levels were in decline. Such changes could ‘affect the observed rate of autism and could confound or distort trends in autism rates,’ the NIEHS warned. ‘CDC concurs,’ Dr. Gerberding wrote again, ‘that conducting an ecologic analysis using VSD administrative data to address potential associations between thimerosal exposure and risk of ASD is not useful.’”
FACT: This is correct. Believe it or not, Mr. Kirby has got it right this one time. The charge of the NIEHS Expert Panel was to determine whether the VSD should be used to do ecological studies. The expert panel concluded, “No.” The CDC concurs.

I’ll leave you with the most important summarizing quote of the CDC report:

NIEHS Finding: Strengths: The panel identified several major strengths of the VSD to be: its ability to detect infrequent, vaccine-related adverse events of modest size; the possibility to supplement the MCO administrative data with reviews of medical records, interviews with parents and children, and additional diagnostic assessments; and the availability of demographic information about the MCO members.

CDC Response: CDC agrees with the panel’s assessment of the strengths of the VSD Project to evaluate vaccine safety concerns. The VSD is a unique public-private collaboration that provides a model for the study of patient safety concerns by using individual-level data. In addition, CDC recognizes the tremendous value of the VSD as a national resource of expertise in vaccine safety research.

I can’t help but agree with Kirby’s recommendation, “I hope everyone will read these documents, including the recommendations to make the VSD better, and the CDC’s agreement with all of the suggestions.” As the waning weeks of Omnibus Autism testimony get underway, I wonder whether a little housecleaning might be in order at Huffington Post and other news outlets looking for real medical reporters rather than outright liars.

 


Scientists behaving badly

In this week’s issue of Nature, there are three articles relevant to the above theme:
1. Sandra Titus of HHS’s Office of Research Integrity and two colleagues surveyed 2,212 researchers throughout the United States. Titus’s team found that almost 9% of the respondents in their survey, mainly biomedical scientists, had witnessed some form of scientific misconduct in the past three years, and that 37% of those incidents went unreported. Titus et al. outline a number of measures to address this situation, including better protection for whistleblowers, and promotion of a “zero tolerance” culture in which scientists have just as much responsibility to report others’ misconduct as they have for their own behavior.

2. There’s a news brief about a researcher suspended for falsifying data. Two of the scientist’s papers have been retracted, and the Office of Research Integrity barred her from receiving any US government grants for five years.

3. There’s an editorial, entitled “Solutions, Not Scapegoats,” in which the editors of Nature argue that “the solution [to scientific misconduct] needs to be wide-ranging yet nuanced.”

Believe it or not, anti-vaccinationists have already begun to grab onto these stories. If you fail to see the connection, here’s a direct quote from a post referring to the three articles above: “For some people, to vaccinate or not is an issue of trust. When government/pharma sponsored research is so obviously self-serving and unreliable, it is no wonder people have been shunning vaccinations.” I still think this is a non sequitur, but some members of the anti-vaccination crowd love to collect stories of research fraud (even if completely unrelated to vaccine research).

One of their favorites is the case of Anne Butkovitz, “ex-clinical study coordinator,…who [in 2005] pled guilty to falsifying case report forms, and has been debarred permanently by the FDA. The study was a multi-site pediatric study of a rotavirus vaccine and was sponsored by a… pharmaceutical company. ([Note that]…the drug company apparently did nothing wrong.) According to the study protocol, the clinical study coordinator at each site was supposed to contact subjects’ parents at specified intervals to determine whether any serious adverse events had occurred. At one of the sites, however, Butkovitz failed to contact parents but stated on case report forms that she had contacted them and that no serious adverse events had occurred. The pharmaceutical company sponsor reportedly disregarded data from her site.” (This quote is from the now defunct blog Regulatory Affairs of the Heart, which tried to report objectively on “drug regulatory affairs and FDA compliance” and was certainly not an anti-vaccination blog.)

Are these cases evidence that Modern Science is failing us, “research is unreliable,” and people shouldn’t put their trust in scientific research? I would argue just the opposite. If you’ll allow me a paragraph of intellectual digression (so I can enter this post in the Carnival of the Elitist Bastards), outside of Communist countries where Lysenkoism was practiced, Modern Science has been an “open society.” What do I mean when I say that Modern Science has been an “open society”? I think the concept was best summed up by Robert Merton, the “father of the sociology of science,” in what he described as the CUDOS set of scientific norms: Communalism, Universalism, Disinterestedness, and Organized Skepticism. Communalism is the common ownership of scientific discoveries, according to which scientists give up intellectual property rights in exchange for recognition and esteem. According to universalism, claims to truth are evaluated in terms of universal or impersonal criteria, and not on the basis of race, class, gender, religion, or nationality. According to disinterestedness, scientists are rewarded for acting in ways that outwardly appear to be selfless. By organized skepticism, Merton meant that all ideas must be tested and are subject to rigorous, structured community scrutiny. (Merton wrote in 1942. For a 2005 constructive critique of CUDOS, see The Public Value of Science, Or how to ensure that science really matters.)

But let’s get back to a few “evidence-based” arguments for why I think the Nature articles and the Anne Butkovitz case provide facts in favor of keeping our trust in Modern Science. Let me count the ways:
1. The study by Sandra Titus and colleagues was published in Nature, one of the two major general science journals. The results weren’t hushed up, nor were they uncovered as part of a world-wide conspiracy by a Freedom of Information Act request. The same is true of the news brief about the researcher who falsified data.
2. In the United States there is an Office of Research Integrity (ORI), which promotes integrity in biomedical and behavioral research. ORI “monitors institutional investigations of research misconduct and facilitates the responsible conduct of research through educational, preventive, and regulatory activities.”
3. In the case of Anne Butkovitz, she was permanently debarred by the FDA, and the pharmaceutical company sponsor disregarded data from her site.
4. Most scientists, especially those in supervisory positions, do try hard to create an environment where fraud and misconduct will occur very rarely, hopefully not at all. A fascinating example of this is the consulting firm, P. Below Consulting, which provides clinical research services for the pharmaceutical and medical device industries. One of the activities they specialize in is helping investigators to avoid fraud and misconduct. I highly recommend taking a look at their web page on this subject. It includes PowerPoint presentations and links to several worthwhile references.

In sum: Researchers aren’t perfect. However, to lose all trust in science is going way too far. The Office of Research Integrity, the FDA, and others are clamping down on questionable research practices. Nature and other scientific journals are urging scientists to go even further in their vigilance.

Another hat tip to Steve D. There’s an anti-vaccination troll who visits his blog and who inspired this post.


On behalf of the Office of the Surgeon General and the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), we would like to extend an invitation to join us for the Surgeon General’s Conference on the Prevention of Preterm Birth via webcast on June 16 and June 17, 2008.
Preterm birth remains one of the most complicated and difficult-to-address research and public health issues in obstetrics and pediatrics. More than 12 percent of all babies born in the United States are born preterm, and this rate continues to rise.

The purpose of the Surgeon General’s Conference on the Prevention of Preterm Birth is to:

(1) Increase awareness of preterm birth in the United States;

(2) Review key findings and reports issued by experts in the field; and

(3) Establish a national agenda for activities in both the public and private sectors to address this growing public health problem.

The live webcast can be viewed starting at 8:00 am Eastern Standard Time on June 16, 2008.

The conference organizers at NICHD have asked Epi Wonk to come out of retirement for two days to participate in one of the scientific workgroups at the meeting in North Bethesda.

Background Reading: (1) The Institute of Medicine report, Preterm Birth: Causes, Consequences, and Prevention. (2) RL Goldenberg et al. Epidemiology and causes of preterm birth. Lancet 2008; 371:75-84. (3) JD Iams et al. Primary, secondary, and tertiary interventions to reduce the morbidity and mortality of preterm birth. Lancet 2008; 371:164-175. (4) S Saigal & LW Doyle. An overview of mortality and sequelae of preterm birth from infancy to adulthood. Lancet 2008; 371:261-269.  There’s also quite a bit of information at the March of Dimes website.
